You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

This section provides a brief summary of the problems that this principle seeks to address and protect against, including illustrative examples.

There are many reasons people may prefer not to use an automated system: the system can be flawed and can lead to unintended outcomes; it may reinforce bias or be inaccessible; it may simply be inconvenient or unavailable; or it may replace a paper or manual process to which people had grown accustomed. Yet members of the public are often presented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once they decide they no longer want to deal exclusively with the automated system or be impacted by its results. As a result of this lack of human reconsideration, many receive delayed access, or lose access, to rights, opportunities, benefits, and critical services. The American public deserves the assurance that, when rights, opportunities, or access are meaningfully at stake and there is a reasonable expectation of an alternative to an automated system, they can conveniently opt out of an automated system and will not be disadvantaged for that choice. In some cases, such a human or other alternative may be required by law, for example, as a “reasonable accommodation” for people with disabilities.

In addition to being able to opt out and use a human alternative, the American public deserves a human fallback system in the event that an automated system fails or causes harm. No matter how rigorously an automated system is tested, there will always be situations for which the system fails. The American public deserves protection via human review against these outlying or unexpected scenarios. In the case of time-critical systems, the public should not have to wait—immediate human consideration and fallback should be available. In many time-critical systems, such a remedy is already immediately available, such as a building manager who can open a door in the case an automated card access system fails.

In the criminal justice system, employment, education, healthcare, and other sensitive domains, automated systems are used for many purposes, from pre-trial risk assessments and parole decisions to technologies that help doctors diagnose disease. Absent appropriate safeguards, these technologies can lead to unfair, inaccurate, or dangerous outcomes. These sensitive domains require extra protections. It is critically important that there is extensive human oversight in such settings.

These critical protections have been adopted in some scenarios. Where automated systems have been introduced to provide the public access to government benefits, existing human paper and phone-based processes are generally still in place, providing an important alternative to ensure access. Companies that have introduced automated call centers often retain the option of dialing zero to reach an operator. When automated identity controls are in place to board an airplane or enter the country, a person supervising the systems is available to help or to hear an appeal of a misidentification.

The American people deserve the reassurance that such procedures are in place to protect their rights, opportunities, and access. People make mistakes, and a human alternative or fallback mechanism will not always have the right answer, but they serve as an important check on the power and validity of automated systems.

  • An automated signature matching system is used as part of the voting process in many parts of the country to determine whether the signature on a mail-in ballot matches the signature on file. These signature matching systems are less likely to work correctly for some voters, including voters with mental or physical disabilities, voters with shorter or hyphenated names, and voters who have changed their name.[i] A human curing process,[ii] which helps voters to confirm their signatures and correct other voting mistakes, is important to ensure all votes are counted,[iii] and it is already standard practice in much of the country for both an election official and the voter to have the opportunity to review and correct any such issues.[iv]
  • An unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity. No alternative human option was readily available, which denied many people access to benefits.[v]
  • A fraud detection system for unemployment insurance distribution incorrectly flagged entries as fraudulent, leading to people with slight discrepancies or complexities in their files having their wages withheld and tax returns seized without any chance to explain themselves or receive a review by a person.[vi]
  • A patient was wrongly denied access to pain medication when the hospital’s software confused her medication history with her dog’s. Even after she tracked down an explanation for the problem, doctors were afraid to override the system, and she was forced to go without pain relief due to the system’s error.[vii]
  • A large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal, or other form of recourse.[viii]

The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

An automated system should provide demonstrably effective mechanisms to opt out in favor of a human alternative, where appropriate, as well as timely human consideration and remedy by a fallback system, with additional human oversight and safeguards for systems used in sensitive domains, and with training and assessment for any human-based portions of the system to ensure effectiveness.

Provide a mechanism to conveniently opt out from automated systems in favor of a human alternative, where appropriate

  • Brief, clear, accessible notice and instructions. Those impacted by an automated system should be given a brief, clear notice that they are entitled to opt out, along with clear instructions for how to do so. Instructions should be provided in an accessible form and should be easily findable by those impacted by the automated system. The brevity, clarity, and accessibility of the notice and instructions should be assessed (e.g., via user experience research).
  • Human alternatives provided when appropriate. In many scenarios, there is a reasonable expectation of human involvement in attaining rights, opportunities, or access. When automated systems make up part of the attainment process, alternative timely human-driven processes should be provided. The use of a human alternative should be triggered by an opt-out process.
  • Timely and not burdensome human alternative. Opting out should be timely and not unreasonably burdensome in both the process of requesting to opt out and the human-driven alternative provided.
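The opt-out expectation above can be illustrated with a minimal routing sketch. This is a hypothetical illustration, not a prescribed implementation: the `Request`, `Channel`, and `route` names are invented here to show that exercising the opt-out should deterministically route a person to the human-driven process before any automated decision is made.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Channel(Enum):
    """The process that will handle a person's request."""
    AUTOMATED = auto()
    HUMAN = auto()


@dataclass
class Request:
    applicant_id: str
    opted_out: bool = False  # set when the person exercises the opt-out


def route(request: Request) -> Channel:
    # Honor the opt-out before any automated decision is made, so
    # choosing the human alternative carries no added disadvantage.
    return Channel.HUMAN if request.opted_out else Channel.AUTOMATED
```

In practice the routing decision would also record when the opt-out was requested, so that the timeliness of the human-driven alternative can be assessed.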

Provide timely human consideration and remedy by a fallback and escalation system if an automated system fails, produces an error, or you would like to appeal or contest its impacts on you

  • Proportionate. The availability of human consideration and fallback, along with associated training and safeguards against human bias, should be proportionate to the potential of the automated system to meaningfully impact rights, opportunities, or access. Automated systems that have greater control over outcomes, provide input to high-stakes decisions, relate to sensitive domains, or otherwise have a greater potential to meaningfully impact rights, opportunities, or access should have greater availability (e.g., staffing) and oversight of human consideration and fallback mechanisms.
  • Accessible. Mechanisms for human consideration and fallback, whether in-person, on paper, by phone, or otherwise provided, should be easy to find and use. These mechanisms should be tested to ensure that users who have trouble with the automated system are able to use human consideration and fallback, with the understanding that it may be these users who are most likely to need the human assistance. Similarly, it should be tested to ensure that users with disabilities are able to find and use human consideration and fallback and also request reasonable accommodations or modifications.
  • Convenient. Mechanisms for human consideration and fallback should not be unreasonably burdensome as compared to the automated system’s equivalent.
  • Equitable. Consideration should be given to ensuring outcomes of the fallback and escalation system are equitable when compared to those of the automated system and such that the fallback and escalation system provides equitable access to underserved communities.[ix]
  • Timely. Human consideration and fallback are only useful if they are conducted and concluded in a timely manner. The determination of what is timely should be made relative to the specific automated system, and the review system should be staffed and regularly assessed to ensure it is providing timely consideration and fallback. In time-critical systems, this mechanism should be immediately available or, where possible, available before the harm occurs. Time-critical systems include, but are not limited to, voting-related systems, automated building access and other access systems, systems that form a critical component of healthcare, and systems that have the ability to withhold wages or otherwise cause immediate financial penalties.
  • Effective. The organizational structure surrounding processes for consideration and fallback should be designed so that if the human decision-maker charged with reassessing a decision determines that it should be overruled, the new decision will be effectively enacted. This includes ensuring that the new decision is entered into the automated system throughout its components, any previous repercussions from the old decision are also overturned, and safeguards are put in place to help ensure that future decisions do not result in the same errors.
  • Maintained. The human consideration and fallback process and any associated automated processes should be maintained and supported as long as the relevant automated system continues to be in use.
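The "proportionate" and "timely" criteria above imply tracking wait times against targets that scale with a system's stakes. The sketch below is a hypothetical illustration of that idea, under assumed tier names and service-level targets (the `SLA_BY_TIER` values are invented for illustration, not drawn from this framework); a real fallback process would set targets relative to the specific system and use overdue flags to adjust staffing.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical service-level targets: time-critical systems call for
# immediate human review; lower-stakes systems allow a longer window.
SLA_BY_TIER = {
    "time_critical": timedelta(minutes=0),
    "high_stakes": timedelta(hours=24),
    "routine": timedelta(days=5),
}


@dataclass
class Appeal:
    request_id: str
    tier: str            # proportionate to impact on rights/opportunities/access
    waited: timedelta    # time since the person requested human consideration


def is_overdue(appeal: Appeal) -> bool:
    # Flag appeals whose wait time exceeds the tier's target, so the
    # fallback process can be regularly assessed and re-staffed.
    return appeal.waited > SLA_BY_TIER[appeal.tier]
```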

Institute training, assessment, and oversight to combat automation bias and ensure any human-based components of a system are effective

  • Training and assessment. Anyone administering, interacting with, or interpreting the outputs of an automated system should receive training in that system, including how to properly interpret outputs of a system in light of its intended purpose and in how to mitigate the effects of automation bias. The training should reoccur regularly to ensure it is up to date with the system and to ensure the system is used appropriately. Assessment should be ongoing to ensure that the use of the system with human involvement provides for appropriate results, i.e., that the involvement of people does not invalidate the system’s assessment as safe and effective or lead to algorithmic discrimination.
  • Oversight. Human-based systems have the potential for bias, including automation bias, as well as other concerns that may limit their effectiveness. The results of assessments of the efficacy and potential bias of such human-based systems should be overseen by governance structures that have the potential to update the operation of the human-based system in order to mitigate these effects.

Implement additional human oversight and safeguards for automated systems related to sensitive domains

Automated systems used within sensitive domains, including criminal justice, employment, education, and health, should meet the expectations laid out throughout this framework, especially avoiding capricious, inappropriate, and discriminatory impacts of these technologies. Additionally, automated systems used within sensitive domains should meet these expectations:

  • Narrowly scoped data and inferences. Human oversight should ensure that automated systems in sensitive domains are narrowly scoped to address a defined goal, justifying each included data item or attribute as relevant to the specific use case. Data included should be carefully limited to avoid algorithmic discrimination resulting from, e.g., use of community characteristics, social network analysis, or group-based inferences.
  • Tailored to the situation. Human oversight should ensure that automated systems in sensitive domains are tailored to the specific use case and real-world deployment scenario, and evaluation testing should show that the system is safe and effective for that specific situation. Validation testing performed based on one location or use case should not be assumed to transfer to another.
  • Human consideration before any high-risk decision. Automated systems, where they are used in sensitive domains, may play a role in directly providing information or otherwise providing positive outcomes to impacted people. However, automated systems should not be allowed to directly intervene in high-risk situations, such as sentencing decisions or medical care, without human consideration.
  • Meaningful access to examine the system. Designers, developers, and deployers of automated systems should consider limited waivers of confidentiality (including those related to trade secrets) where necessary in order to provide meaningful oversight of systems used in sensitive domains, incorporating measures to protect intellectual property and trade secrets from unwarranted disclosure as appropriate. This includes (potentially private and protected) meaningful access to source code, documentation, and related data during any associated legal discovery, subject to effective confidentiality or court orders. Such meaningful access should include (but is not limited to) adhering to the principle on Notice and Explanation using the highest level of risk so the system is designed with built-in explanations; such systems should use fully-transparent models where the model itself can be understood by people needing to directly examine it.

Demonstrate access to human alternatives, consideration, and fallback

  • Reporting. Reporting should include an assessment of timeliness and the extent of additional burden for human alternatives, aggregate statistics about who chooses the human alternative, along with the results of the assessment about brevity, clarity, and accessibility of notice and opt-out instructions. Reporting on the accessibility, timeliness, and effectiveness of human consideration and fallback should be made public at regular intervals for as long as the system is in use. This should include aggregated information about the number and type of requests for consideration, fallback employed, and any repeated requests; the timeliness of the handling of these requests, including mean wait times for different types of requests as well as maximum wait times; and information about the procedures used to address requests for consideration along with the results of the evaluation of their accessibility. For systems used in sensitive domains, reporting should include information about training and governance procedures for these technologies. Reporting should also include documentation of goals and assessment of meeting those goals, consideration of data included, and documentation of the governance of reasonable access to the technology. Reporting should be provided in a clear and machine-readable manner.
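The reporting expectation above calls for aggregate statistics in a clear, machine-readable form. The following sketch shows one hypothetical way to assemble such a report; the field names and wait-time figures are invented for illustration and are not a schema defined by this framework.

```python
import json
from statistics import mean

# Hypothetical per-request wait times (in hours) collected by the
# fallback process, grouped by request type.
wait_times = {
    "appeal": [2.0, 30.0, 7.5],
    "opt_out": [1.0, 4.0],
}

# Aggregate statistics named in the reporting expectation: counts by
# request type, plus mean and maximum wait times.
report = {
    "requests_by_type": {k: len(v) for k, v in wait_times.items()},
    "mean_wait_hours": {k: round(mean(v), 1) for k, v in wait_times.items()},
    "max_wait_hours": {k: max(v) for k, v in wait_times.items()},
}

print(json.dumps(report, indent=2))  # machine-readable public report
```

Publishing the report as JSON (or a similar structured format) at regular intervals would let outside parties compare timeliness across systems and over time.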

Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

  • Healthcare “navigators” help people find their way through online signup forms to choose and obtain healthcare. A Navigator is “an individual or organization that’s trained and able to help consumers, small businesses, and their employees as they look for health coverage options through the Marketplace [a government web site], including completing eligibility and enrollment forms.”[x] For the 2022 plan year, the Biden-Harris Administration increased funding so that grantee organizations could “train and certify more than 1,500 Navigators to help uninsured consumers find affordable and comprehensive health coverage.”[xi]
  • The customer service industry has successfully integrated automated services such as chatbots and AI-driven call response systems with escalation to a human support team.[xii] Many businesses now use partially automated customer service platforms that help answer customer questions and compile common problems for human agents to review. These integrated human-AI systems allow companies to provide faster customer care while maintaining human agents to answer calls or otherwise respond to complicated requests. Using both AI and human agents is viewed as key to successful customer service.[xiii]
  • Ballot curing laws in at least 24 states require a fallback system that allows voters to correct their ballot and have it counted in the case that a voter signature matching algorithm incorrectly flags their ballot as invalid or there is another issue with their ballot, and review by an election official does not rectify the problem. Some federal courts have found that such cure procedures are constitutionally required.[xiv] Ballot curing processes vary among states, and include direct phone calls, emails, or mail contact by election officials.[xv] Voters are asked to provide alternative information or a new signature to verify the validity of their ballot.

[i] Kyle Wiggers. Automatic signature verification software threatens to disenfranchise U.S. voters. VentureBeat. Oct. 25, 2020. https://venturebeat.com/2020/10/25/automatic-signature-verification-software-threatens-to-disenfranchise-u-s-voters/

[ii] Ballotpedia. Cure period for absentee and mail-in ballots. Article retrieved Apr 18, 2022. https://ballotpedia.org/Cure_period_for_absentee_and_mail-in_ballots

[iii] Larry Buchanan and Alicia Parlapiano. Two of these Mail Ballot Signatures are by the Same Person. Which Ones? New York Times. Oct. 7, 2020. https://www.nytimes.com/interactive/2020/10/07/upshot/mail-voting-ballots-signature-matching.html

[iv] Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020. https://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/

[v] Andrew Kenney. ‘I’m shocked that they need to have a smartphone’: System for unemployment benefits exposes digital divide. USA Today. May 2, 2021. https://www.usatoday.com/story/tech/news/2021/05/02/unemployment-benefits-system-leaving-people-behind/4915248001/

[vi] Allie Gross. UIA lawsuit shows how the state criminalizes the unemployed. Detroit Metro-Times. Sep. 18, 2015. https://www.metrotimes.com/news/uia-lawsuit-shows-how-the-state-criminalizes-the-unemployed-2369412

[vii] Maia Szalavitz. The Pain Was Unbearable. So Why Did Doctors Turn Her Away? Wired. Aug. 11, 2021. https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/

[viii] Spencer Soper. Fired by Bot at Amazon: “It’s You Against the Machine”. Bloomberg, Jun. 28, 2021. https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-managers-and-workers-are-losing-out

[ix] Definitions of ‘equity’ and ‘underserved communities’ can be found in the Definitions section of this document as well as in Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government: https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/

[x] HealthCare.gov. Navigator – HealthCare.gov Glossary. Accessed May 2, 2022. https://www.healthcare.gov/glossary/navigator/

[xi] Centers for Medicare & Medicaid Services. Biden-Harris Administration Quadruples the Number of Health Care Navigators Ahead of HealthCare.gov Open Enrollment Period. Aug. 27, 2021. https://www.cms.gov/newsroom/press-releases/biden-harris-administration-quadruples-number-health-care-navigators-ahead-healthcaregov-open

[xii] See, e.g., McKinsey & Company. The State of Customer Care in 2022. July 8, 2022. https://www.mckinsey.com/business-functions/operations/our-insights/the-state-of-customer-care-in-2022; Sara Angeles. Customer Service Solutions for Small Businesses. Business News Daily. Jun. 29, 2022. https://www.businessnewsdaily.com/7575-customer-service-solutions.html 

[xiii] Mike Hughes. Are We Getting The Best Out Of Our Bots? Co-Intelligence Between Robots & Humans. Forbes. Jul. 14, 2022. https://www.forbes.com/sites/mikehughes1/2022/07/14/are-we-getting-the-best-out-of-our-bots-co-intelligence-between-robots--humans/?sh=16a2bd207395

[xiv] Rachel Orey and Owen Bacskai. The Low Down on Ballot Curing. Nov. 04, 2020. https://bipartisanpolicy.org/blog/the-low-down-on-ballot-curing/; Zahavah Levine and Thea Raymond-Seidel. Mail Voting Litigation in 2020, Part IV: Verifying Mail Ballots. Oct. 29, 2020. https://www.lawfareblog.com/mail-voting-litigation-2020-part-iv-verifying-mail-ballots.

[xv] National Conference of State Legislatures. Table 15: States With Signature Cure Processes. Jan. 18, 2022. https://www.ncsl.org/research/elections-and-campaigns/vopp-table-15-states-that-permit-voters-to-correct-signature-discrepancies.aspx.
