Notice and Explanation
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.
This section provides a brief summary of the problems that the principle seeks to address and protect against, along with illustrative examples.
Automated systems now determine opportunities, from employment to credit, and directly shape the American public’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this expansive impact is not always visible. An applicant might not know whether a person rejected their resume or a hiring algorithm moved them to the bottom of the list. A defendant in the courtroom might not know if a judge denying their bail is informed by an automated system that labeled them “high risk.” From correcting errors to contesting decisions, people are often denied the knowledge they need to address the impact of automated systems on their lives. Notice and explanations also serve an important safety and efficacy purpose, allowing experts to verify the reasonableness of a recommendation before enacting it.
In order to guard against potential harms, the American public needs to know if an automated system is being used. Clear, brief, and understandable notice is a prerequisite for achieving the other protections in this framework. Likewise, the public is often unable to ascertain how or why an automated system has made a decision or contributed to a particular outcome. The decision-making processes of automated systems tend to be opaque, complex, and, therefore, unaccountable, whether by design or by omission. These factors can make explanations both more challenging and more important, and should not be used as a pretext to avoid explaining important decisions to the people impacted by those choices. In the context of automated systems, clear and valid explanations should be recognized as a baseline requirement.
Providing notice has long been a standard practice, and in many cases is a legal requirement, when, for example, making a video recording of someone (outside of a law enforcement or national security context). In some cases, such as credit, lenders are required to provide notice and explanation to consumers. Techniques used to automate the process of explaining such systems are under active research and improvement and such explanations can take many forms. Innovative companies and researchers are rising to the challenge and creating and deploying explanatory systems that can help the public better understand decisions that impact them.
While notice and explanation requirements are already in place in some sectors or situations, the American public deserves to know consistently and across sectors whether an automated system is being used in a way that impacts their rights, opportunities, or access. This knowledge should provide confidence in how the public is being treated and trust in the validity and reasonable use of automated systems.
- A lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home health-care assistance couldn’t determine why, especially since the decision went against historical access practices. In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility.[i] The lack of a timely explanation made it harder to understand and contest the decision.
- A formal child welfare investigation is opened against a parent based on an algorithm and without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment.[ii] The lack of notice or an explanation makes it harder for those performing child maltreatment assessments to validate the risk assessment and denies parents knowledge that could help them contest a decision.
- A predictive policing system claimed to identify individuals at greatest risk of committing or becoming the victim of gun violence (based on automated analysis of social ties to gang members, criminal histories, previous experiences of gun violence, and other factors) and led to individuals being placed on a watch list with no explanation or public transparency regarding how the system came to its conclusions.[iii] Both police and the public deserve to understand why and how such a system is making these determinations.
- A system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry errors and other system flaws. These flaws were only revealed when an explanation of the system was demanded and produced.[iv] The lack of an explanation made it harder for errors to be corrected in a timely manner.
The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.
An automated system should provide demonstrably clear, timely, understandable, and accessible notice of use, and explanations as to how and why a decision was made or an action was taken by the system. These expectations are explained below.
Provide clear, timely, understandable, and accessible notice of use and explanations
- Generally accessible plain language documentation. The entity responsible for using the automated system should ensure that documentation describing the overall system (including any human components) is public and easy to find. The documentation should describe, in plain language, how the system works and how any automated component is used to determine an action or decision. It should also include expectations about reporting described throughout this framework, such as the algorithmic impact assessments described as part of Algorithmic Discrimination Protections.
- Accountable. Notices should clearly identify the entity responsible for designing each component of the system and the entity using it.
- Timely and up-to-date. Users should receive notice of the use of automated systems in advance of using or while being impacted by the technology. An explanation should be available with the decision itself, or soon thereafter. Notice should be kept up-to-date and people impacted by the system should be notified of use case or key functionality changes.
- Brief and clear. Notices and explanations should be assessed, such as by research on users' experiences, including user testing, to ensure that the people using or impacted by the automated system are able to easily find notices and explanations, read them quickly, and understand and act on them. This includes ensuring that notices and explanations are accessible to users with disabilities and are available in the language(s) and reading level appropriate for the audience. Notices and explanations may need to be available in multiple forms (e.g., on paper, on a physical sign, or online) in order to meet these expectations and to be accessible to the American public.
Provide explanations as to how and why a decision was made or an action was taken by an automated system
- Tailored to the purpose. Explanations should be tailored to the specific purpose for which the user is expected to use the explanation, and should clearly state that purpose. An informational explanation might differ from an explanation provided to allow for the possibility of recourse, an appeal, or one provided in the context of a dispute or contestation process. For the purposes of this framework, ‘explanation’ should be construed broadly. An explanation need not be a plain-language statement about causality but could consist of any mechanism that allows the recipient to build the necessary understanding and intuitions to achieve the stated purpose. Tailoring should be assessed (e.g., via user experience research).
- Tailored to the target of the explanation. Explanations should be targeted to specific audiences and clearly state that audience. An explanation provided to the subject of a decision might differ from one provided to an advocate, or to a domain expert or decision maker. Tailoring should be assessed (e.g., via user experience research).
- Tailored to the level of risk. An assessment should be done to determine the level of risk of the automated system. In settings where the consequences are high as determined by a risk assessment, or extensive oversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should be built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully transparent models should be used), rather than as an after-the-decision interpretation. In other settings, the extent of explanation provided should be tailored to the risk level.
- Valid. The explanation provided by a system should accurately reflect the factors and the influences that led to a particular decision, and should be meaningful for the particular customization based on purpose, target, and level of risk. While approximation and simplification may be necessary for the system to succeed based on the explanatory purpose and target of the explanation, or to account for the risk of fraud or other concerns related to revealing decision-making information, such simplifications should be done in a scientifically supportable way. Where appropriate based on the explanatory system, error ranges for the explanation should be calculated and included in the explanation, with the choice of presentation of such information balanced with usability and overall interface complexity concerns. An illustrative sketch of an explanation built directly into a transparent model appears after this list.
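As a concrete illustration of the expectations above, the minimal sketch below (in Python) shows a hypothetical, fully transparent scoring model whose explanation is read directly from the model's own weights, so the stated factors exactly reflect the influences behind the decision. The feature names, weights, and threshold are invented for illustration and are not prescribed by this framework.

```python
# Illustrative sketch only: a hypothetical, fully transparent scoring model whose
# explanation is derived directly from the model's own parameters, so the stated
# factors exactly reflect the influences behind the decision. Feature names,
# weights, and the threshold are invented for illustration.

WEIGHTS = {
    "months_since_last_delinquency": 0.04,
    "credit_utilization_ratio": -2.1,
    "years_of_credit_history": 0.15,
}
INTERCEPT = -1.0
DECISION_THRESHOLD = 0.0

def score_and_explain(applicant: dict) -> dict:
    """Return a decision plus per-factor contributions that sum to the score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    return {
        "decision": "approve" if score >= DECISION_THRESHOLD else "deny",
        "score": round(score, 3),
        # Every factor and its signed influence on the score, ordered by magnitude,
        # so the explanation can be checked against the model itself.
        "factor_influences": dict(
            sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        ),
    }

if __name__ == "__main__":
    print(score_and_explain({
        "months_since_last_delinquency": 18,
        "credit_utilization_ratio": 0.85,
        "years_of_credit_history": 6,
    }))
```

Because each factor's contribution is computed from the same parameters that produce the score, the explanation can be verified for validity against the model itself and can be generated in advance of any individual decision, rather than as an after-the-decision interpretation.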
Demonstrate protections for notice and explanation
- Reporting. Summary reporting should document the determinations made based on the above considerations, including: the responsible entities for accountability purposes; the goal and use cases for the system, identified users, and impacted populations; the assessment of notice clarity and timeliness; the assessment of the explanation's validity and accessibility; the assessment of the level of risk; and the account and assessment of how explanations are tailored, including to the purpose, the recipient of the explanation, and the level of risk. Individualized profile information should be made readily available to the greatest extent possible and should include explanations for any system impacts or inferences. Reporting should be provided in clear, plain language and in a machine-readable format.
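The sketch below illustrates one possible machine-readable form for such summary reporting. It is a hypothetical example only; the system, schema, field names, and contents are invented and are not prescribed by this framework.

```python
# Illustrative sketch only: one possible machine-readable form for the summary
# reporting described above. The system, schema, and field names are hypothetical.
import json

summary_report = {
    "system_name": "Example benefits eligibility screener",  # hypothetical system
    "responsible_entity": "Example State Department of Health",
    "goal_and_use_cases": "Screens applications for home health-care assistance.",
    "identified_users_and_impacted_populations": [
        "caseworkers", "benefits applicants and recipients"
    ],
    "notice_assessment": {
        "clarity": "Tested with applicants; readable at an 8th-grade level.",
        "timeliness": "Notice provided at application and with each decision.",
    },
    "explanation_assessment": {
        "validity": "Explanations reproduce the factors used by the eligibility rules.",
        "accessibility": "Available on paper, online, and in the applicant's language.",
    },
    "risk_level": "high",
    "explanation_tailoring": {
        "purpose": "supports appeal and error correction",
        "recipients": ["applicant", "advocate", "caseworker"],
    },
}

print(json.dumps(summary_report, indent=2))
```

In practice, a plain-language narrative version of the same report would accompany any machine-readable form so that both parts of the reporting expectation are met.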
Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.
- People in Illinois are given written notice by the private sector if their biometric information is used. The Biometric Information Privacy Act enacted by the state contains a number of provisions concerning the use of individual biometric data and identifiers. Included among them is a provision that no private entity may “collect, capture, purchase, receive through trade, or otherwise obtain” such information about an individual, unless written notice is provided to that individual or their legally appointed representative.[v]
- Major technology companies are piloting new ways to communicate with the public about their automated technologies. For example, a collection of non-profit organizations and companies have worked together to develop a framework that defines operational approaches to transparency for machine learning systems.[vi] This framework and others like it[vii] inform the public about the use of these tools, going beyond simple notice to include reporting elements such as safety evaluations, disparity assessments, and explanations of how the systems work.
- Lenders are required by federal law to notify consumers about certain decisions made about them. Both the Fair Credit Reporting Act and the Equal Credit Opportunity Act require in certain circumstances that consumers who are denied credit receive "adverse action" notices. Anyone who relies on the information in a credit report to deny a consumer credit must, under the Fair Credit Reporting Act, provide an "adverse action" notice to the consumer, which includes "notice of the reasons a creditor took adverse action on the application or on an existing credit account."[viii] In addition, under the risk-based pricing rule,[ix] lenders must either inform borrowers of their credit score, or else tell consumers when "they are getting worse terms because of information in their credit report." The CFPB has also asserted that "[t]he law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn't understand."[x] Such explanations illustrate a shared value that certain decisions need to be explained. An illustrative sketch of how plain-language reason statements might be derived from a transparent scoring model appears after this list of examples.
- A California law requires that warehouse employees are provided with notice and explanation about quotas, potentially facilitated by automated systems, that apply to them. Warehousing employers in California that use quota systems (often facilitated by algorithmic monitoring systems) are required to provide employees with a written description of each quota that applies to the employee, including “quantified number of tasks to be performed or materials to be produced or handled, within the defined time period, and any potential adverse employment action that could result from failure to meet the quota.”[xi]
- Across the federal government, agencies are conducting and supporting research on explainable AI systems. NIST is conducting fundamental research on the explainability of AI systems. A multidisciplinary team of researchers aims to develop measurement methods and best practices to support the implementation of core tenets of explainable AI.[xii] The Defense Advanced Research Projects Agency has a program on Explainable Artificial Intelligence that aims to create a suite of machine learning techniques that produce more explainable models, while maintaining a high level of learning performance (prediction accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.[xiii] The National Science Foundation's program on Fairness in Artificial Intelligence also includes a specific interest in research foundations for explainable AI.[xiv]
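To illustrate how a specific explanation of a credit denial might be assembled in practice, the sketch below translates the largest adverse factor influences from a hypothetical transparent scoring model into plain-language reason statements of the kind an adverse action notice contains. The factors, influence values, and wording are invented for illustration; this sketch is not legal guidance on FCRA or ECOA compliance.

```python
# Illustrative sketch only: turning the largest negative factor influences from a
# hypothetical transparent credit model into plain-language reason statements of
# the kind an adverse action notice might contain. The factors and wording are
# invented and do not represent FCRA or ECOA requirements.

REASON_STATEMENTS = {
    "credit_utilization_ratio": "Proportion of balances to credit limits is too high.",
    "months_since_last_delinquency": "Time since most recent delinquency is too short.",
    "years_of_credit_history": "Length of credit history is too short.",
}

def top_denial_reasons(factor_influences: dict, max_reasons: int = 4) -> list:
    """List the factors that pushed the score down the most, in plain language."""
    negative = [(name, value) for name, value in factor_influences.items() if value < 0]
    negative.sort(key=lambda kv: kv[1])  # most negative (largest adverse influence) first
    return [REASON_STATEMENTS[name] for name, _ in negative[:max_reasons]]

if __name__ == "__main__":
    # Hypothetical per-factor influences, e.g. produced by a transparent scoring model.
    print(top_denial_reasons({
        "credit_utilization_ratio": -1.785,
        "months_since_last_delinquency": 0.72,
        "years_of_credit_history": -0.4,
    }))
```

Because the reasons are drawn directly from the factors that actually lowered the score, such a notice can be both specific to the individual and verifiable against the system that made the determination.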
[i] Karen Hao. The coming war on the hidden algorithms that trap people in poverty. MIT Tech Review. Dec. 4, 2020. https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back/
[ii] Anjana Samant, Aaron Horowitz, Kath Xu, and Sophie Beiers. Family Surveillance by Algorithm. ACLU. Accessed May 2, 2022. https://www.aclu.org/fact-sheet/family-surveillance-algorithm
[iii] Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The Chicago Sun Times. May 18, 2017. https://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought-to-keep-secret
[iv] Jay Stanley. Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case. ACLU. Jun. 2, 2017. https://www.aclu.org/blog/privacy-technology/pitfalls-artificial-intelligence-decisionmaking-highlighted-idaho-aclu-case
[v] Illinois General Assembly. Biometric Information Privacy Act. Effective Oct. 3, 2008. https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57
[vi] Partnership on AI. ABOUT ML Reference Document. Accessed May 2, 2022. https://partnershiponai.org/paper/about-ml-reference-document/1/
[vii] See, e.g., the model cards framework: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model Cards for Model Reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). Association for Computing Machinery, New York, NY, USA, 220–229. https://dl.acm.org/doi/10.1145/3287560.3287596
[viii] Sarah Ammermann. Adverse Action Notice Requirements Under the ECOA and the FCRA. Consumer Compliance Outlook. Second Quarter 2013. https://consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements-under-ecoa-fcra/
[ix] Federal Trade Commission. Using Consumer Reports for Credit Decisions: What to Know About Adverse Action and Risk-Based Pricing Notices. Accessed May 2, 2022. https://www.ftc.gov/business-guidance/resources/using-consumer-reports-credit-decisions-what-know-about-adverse-action-risk-based-pricing-notices#risk
[x] Consumer Financial Protection Bureau. CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms. May 26, 2022. https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/
[xi] Anthony Zaller. California Passes Law Regulating Quotas In Warehouses – What Employers Need to Know About AB 701. Zaller Law Group California Employment Law Report. Sept. 24, 2021. https://www.californiaemploymentlawreport.com/2021/09/california-passes-law-regulating-quotas-in-warehouses-what-employers-need-to-know-about-ab-701/
[xii] National Institute of Standards and Technology. AI Fundamental Research – Explainability. Accessed Jun. 4, 2022. https://www.nist.gov/artificial-intelligence/ai-fundamental-research-explainability
[xiii] DARPA. Explainable Artificial Intelligence (XAI). Accessed July 20, 2022. https://www.darpa.mil/program/explainable-artificial-intelligence
[xiv] National Science Foundation. NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI). Accessed July 20, 2022. https://www.nsf.gov/pubs/2021/nsf21585/nsf21585.htm