You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

This section provides a brief summary of the problems that this principle seeks to address and protect against, along with illustrative examples.

There is extensive evidence showing that automated systems can produce inequitable outcomes and amplify existing inequity.[i] Data that fails to account for existing systemic biases in American society can result in a range of consequences. For example, facial recognition technology can contribute to wrongful and discriminatory arrests,[ii] hiring algorithms can inform discriminatory decisions, and healthcare algorithms can discount the severity of certain diseases in Black Americans. Instances of discriminatory practices built into and resulting from AI and other automated systems exist across many industries, areas, and contexts. While automated systems have the capacity to drive extraordinary advances and innovations, algorithmic discrimination protections should be built into their design, deployment, and ongoing use.

Many companies, non-profits, and federal government agencies are already taking steps to ensure the public is protected from algorithmic discrimination. Some companies have instituted bias testing as part of their product quality assessment and launch procedures, and in some cases this testing has led products to be changed or not launched, preventing harm to the public. Federal government agencies have been developing standards and guidance for the use of automated systems in order to help prevent bias. Non-profits and companies have developed best practices for audits and impact assessments to help identify potential algorithmic discrimination and provide transparency to the public in the mitigation of such biases.

But there is much more work to do to protect the public from algorithmic discrimination and to use and design automated systems in an equitable way. The guardrails protecting the public from discrimination in their daily lives should extend to their digital lives and impacts—basic safeguards against abuse, bias, and discrimination to ensure that all people are treated fairly when automated systems are used. This includes all dimensions of their lives, from hiring to loan approvals, from medical treatment and payment to encounters with the criminal justice system. Ensuring equity should also go beyond existing guardrails to consider the holistic impact that automated systems have on underserved communities and to institute proactive protections that support these communities.

  • An automated system using nontraditional factors such as educational attainment and employment history as part of its loan underwriting and pricing model was found to be much more likely to charge an applicant who attended a Historically Black College or University (HBCU) higher loan prices for refinancing a student loan than an applicant who did not attend an HBCU. This was found to be true even when controlling for other credit-related factors.[iii]
  • A hiring tool that learned the features of a company’s employees (predominantly men) rejected women applicants for spurious and discriminatory reasons; resumes with the word “women’s,” such as “women’s chess club captain,” were penalized in the candidate ranking.[iv]
  • A predictive model marketed as being able to predict whether students are likely to drop out of school was used by more than 500 universities across the country. The model was found to use race directly as a predictor, and also shown to have large disparities by race; Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. These risk scores are used by advisors to guide students towards or away from majors, and some worry that they are being used to guide Black students away from math and science subjects.[v]
  • A risk assessment tool designed to predict the risk of recidivism for individuals in federal custody showed evidence of disparity in prediction. The tool overpredicts the risk of recidivism for some groups of color on its general recidivism tools and underpredicts the risk of recidivism for some groups of color on some of its violent recidivism tools. The Department of Justice is working to reduce these disparities and has publicly released a report detailing its review of the tool.[vi]
  • An automated sentiment analyzer, a tool often used by technology platforms to determine whether a statement posted online expresses a positive or negative sentiment, was found to be biased against Jews and gay people. For example, the analyzer marked the statement “I’m a Jew” as representing a negative sentiment, while “I’m a Christian” was identified as expressing a positive sentiment.[vii] This could lead to the preemptive blocking of social media comments such as: “I’m gay.” A related company with this bias concern has made their data public to encourage researchers to help address the issue[viii] and has released reports identifying and measuring this problem as well as detailing attempts to address it.[ix]
  • Searches for “Black girls,” “Asian girls,” or “Latina girls” return predominantly[x] sexualized content, rather than role models, toys, or activities.[xi] Some search engines have been working to reduce the prevalence of these results, but the problem remains.[xii]
  • Advertisement delivery systems that predict who is most likely to click on a job advertisement end up delivering ads in ways that reinforce racial and gender stereotypes, such as overwhelmingly directing supermarket cashier ads to women and jobs with taxi companies to primarily Black people.[xiii]
  • Body scanners, used by TSA at airport checkpoints, require the operator to select a “male” or “female” scanning setting based on the passenger’s sex, but the setting is chosen based on the operator’s perception of the passenger’s gender identity. These scanners are more likely to flag transgender travelers as requiring extra screening done by a person. Transgender travelers have described degrading experiences associated with these extra screenings.[xiv] TSA has recently announced plans to implement a gender-neutral algorithm while simultaneously enhancing the security effectiveness capabilities of the existing technology.[xv]
  • The National Disabled Law Students Association expressed concerns that individuals with disabilities were more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.[xvi]
  • An algorithm designed to identify patients with high needs for healthcare systematically assigned lower scores (indicating that they were not as high need) to Black patients than to white patients, even when those patients had similar numbers of chronic conditions and other markers of health.[xvii] In addition, healthcare clinical algorithms that are used by physicians to guide clinical decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities.[xviii]

The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

Any automated system should be tested to help ensure it is free from algorithmic discrimination before it can be sold or used. Protection against algorithmic discrimination should include designing to ensure equity, broadly construed.  Some algorithmic discrimination is already prohibited under existing anti-discrimination law. The expectations set out below describe proactive technical and policy steps that can be taken to not only reinforce those legal protections but extend beyond them to ensure equity for underserved communities[xix] even in circumstances where a specific legal protection may not be clearly established. These protections should be instituted throughout the design, development, and deployment process and are described below roughly in the order in which they would be instituted.

Protect the public from algorithmic discrimination in a proactive and ongoing manner

  • Proactive assessment of equity in design. Those responsible for the development, use, or oversight of automated systems should conduct proactive equity assessments in the design phase of the technology research and development or during its acquisition to review potential input data, associated historical context, accessibility for people with disabilities, and societal goals to identify potential discrimination and effects on equity resulting from the introduction of the technology. The assessed groups should be as inclusive as possible of the underserved communities mentioned in the equity definition:  Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality. Assessment could include both qualitative and quantitative evaluations of the system. This equity assessment should also be considered a core part of the goals of the consultation conducted as part of the safety and efficacy review.
  • Representative and robust data. Any data used as part of system development or assessment should be representative of local communities based on the planned deployment setting and should be reviewed for bias based on the historical and societal context of the data. Such data should be sufficiently robust to identify and help to mitigate biases and potential harms.
  • Guarding against proxies. Directly using demographic information in the design, development, or deployment of an automated system (for purposes other than evaluating a system for discrimination or using a system to counter discrimination) runs a high risk of leading to algorithmic discrimination and should be avoided. In many cases, attributes that are highly correlated with demographic features, known as proxies, can contribute to algorithmic discrimination. In cases where use of the demographic features themselves would lead to illegal algorithmic discrimination, reliance on such proxies in decision-making (such as that facilitated by an algorithm) may also be prohibited by law. Proactive testing should be performed to identify proxies by testing for correlation between demographic information and attributes in any data used as part of system design, development, or use; an illustrative sketch of such testing appears after this list. If a proxy is identified, designers, developers, and deployers should remove the proxy; if needed, it may be possible to identify alternative attributes that can be used instead. At a minimum, organizations should ensure a proxy feature is not given undue weight and should monitor the system closely for any resulting algorithmic discrimination.
  • Ensuring accessibility during design, development, and deployment. Systems should be designed, developed, and deployed by organizations in ways that ensure accessibility to people with disabilities. This should include consideration of a wide variety of disabilities, adherence to relevant accessibility standards, and user experience research both before and after deployment to identify and address any accessibility barriers to the use or effectiveness of the automated system.
  • Disparity assessment. Automated systems should be tested using a broad set of measures to assess whether the system components, both in pre-deployment testing and in-context deployment, produce disparities. The demographics of the assessed groups should be as inclusive as possible of race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. The broad set of measures assessed should include demographic performance measures, overall and subgroup parity assessment, and calibration. Demographic data collected for disparity assessment should be separated from data used for the automated system and privacy protections should be instituted; in some cases it may make sense to perform such assessment using a data sample. For every instance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a justification for any continued use of the system. A second illustrative sketch after this list shows how such subgroup measures, and the comparison of candidate models for disparity mitigation, might be computed.
  • Disparity mitigation. When a disparity assessment identifies a disparity against an assessed group, it may be appropriate to take steps to mitigate or eliminate the disparity. In some cases, mitigation or elimination of the disparity may be required by law.  Disparities that have the potential to lead to algorithmic discrimination, cause meaningful harm, or violate equity[xx] goals should be mitigated. When designing and evaluating an automated system, steps should be taken to evaluate multiple models and select the one that has the least adverse impact, modify data input choices, or otherwise identify a system with fewer disparities. If adequate mitigation of the disparity is not possible, then the use of the automated system should be reconsidered. One of the considerations in whether to use the system should be the validity of any target measure; unobservable targets may result in the inappropriate use of proxies. Meeting these standards may require instituting mitigation procedures and other protective measures to address algorithmic discrimination, avoid meaningful harm, and achieve equity goals.
  • Ongoing monitoring and mitigation. Automated systems should be regularly monitored to assess algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data. Monitoring and disparity assessment should be performed by the entity deploying or using the automated system to examine whether the system has led to algorithmic discrimination when deployed. This assessment should be performed regularly and whenever a pattern of unusual results is occurring. It can be performed using a variety of approaches, taking into account whether and how demographic information of impacted people is available, for example via testing with a sample of users or via qualitative user experience research. Riskier and higher-impact systems should be monitored and assessed more frequently. Outcomes of this assessment should include additional disparity mitigation, if needed, or a fallback to earlier procedures when equity standards are no longer met, the disparity cannot be mitigated, and the prior mechanisms provide better adherence to equity standards.
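One minimal, non-authoritative sketch of the proxy testing described above is shown below; the column names, correlation measure, library choices, and threshold are assumptions chosen for illustration, not requirements of this framework.

```python
# Illustrative proxy screening (hypothetical column names and threshold):
# flag candidate features whose association with a demographic attribute
# exceeds a chosen cutoff, so they can be reviewed, removed, or down-weighted.
import pandas as pd

def screen_for_proxies(df: pd.DataFrame,
                       demographic_col: str,
                       candidate_cols: list,
                       threshold: float = 0.4) -> dict:
    """Return numeric candidate features whose maximum absolute correlation
    with any demographic group indicator meets or exceeds the threshold.
    (A measure such as Cramer's V would be more appropriate for categorical
    features; absolute Pearson correlation is used here for brevity.)"""
    # One-hot encode the demographic attribute so correlation is defined
    # for categorical group labels as well as binary ones.
    group_indicators = pd.get_dummies(df[demographic_col], prefix=demographic_col)
    flagged = {}
    for col in candidate_cols:
        corr = group_indicators.apply(
            lambda g: df[col].corr(g.astype(float))).abs().max()
        if corr >= threshold:
            flagged[col] = float(corr)
    return flagged

# Hypothetical usage on loan-underwriting data:
# data = pd.read_csv("applications.csv")
# print(screen_for_proxies(data, "race", ["college_tier", "zip_median_income"]))
# e.g. {"zip_median_income": 0.52} -> review, remove, or down-weight the feature
```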
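A second non-authoritative sketch, under similarly hypothetical assumptions (group labels, decision threshold, metrics, and accuracy bar), illustrates how subgroup selection rates, false positive rates, and calibration gaps might be computed, and how candidate models might be compared so that the one with the least adverse impact can be preferred, as described in the disparity assessment and mitigation expectations above.

```python
# Illustrative subgroup disparity and calibration assessment, plus a simple
# rule for preferring the candidate model with the smallest disparity among
# those that meet a hypothetical accuracy bar. All names and thresholds are
# assumptions for this sketch, not prescribed by the framework.
import numpy as np

def subgroup_report(y_true, y_score, groups, threshold=0.5):
    """Per-group selection rate, false positive rate, and calibration gap."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    groups = np.asarray(groups)
    y_pred = (y_score >= threshold).astype(int)
    report = {}
    for g in np.unique(groups):
        in_group = groups == g
        negatives = in_group & (y_true == 0)
        report[g] = {
            "selection_rate": float(y_pred[in_group].mean()),
            "false_positive_rate": (float(y_pred[negatives].mean())
                                    if negatives.any() else float("nan")),
            # Calibration gap: mean predicted score minus observed outcome rate.
            "calibration_gap": float(y_score[in_group].mean() - y_true[in_group].mean()),
        }
    return report

def selection_rate_gap(report):
    rates = [metrics["selection_rate"] for metrics in report.values()]
    return max(rates) - min(rates)

# Hypothetical model comparison: among candidates accurate enough for the task,
# prefer the one with the smallest subgroup selection-rate gap.
# candidates = {"model_a": scores_a, "model_b": scores_b}   # predicted scores
# accurate = {name: s for name, s in candidates.items()
#             if ((np.asarray(s) >= 0.5).astype(int) == np.asarray(y_true)).mean() >= 0.80}
# best = min(accurate, key=lambda name: selection_rate_gap(
#     subgroup_report(y_true, accurate[name], groups)))
```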

Demonstrate that the system protects against algorithmic discrimination

  • Independent evaluation. As described in the section on Safe and Effective Systems, entities should allow independent evaluation of potential algorithmic discrimination caused by automated systems they use or oversee. In the case of public sector uses, these independent evaluations should be made public unless law enforcement or national security restrictions prevent doing so. Care should be taken to balance individual privacy with evaluation data access needs; in many cases, policy-based and/or technological innovations and controls allow access to such data without compromising privacy.
  • Reporting. Entities responsible for the development or use of automated systems should provide reporting of an appropriately designed algorithmic impact assessment,[xxi] with clear specification of who performs the assessment, who evaluates the system, and how corrective actions are taken (if necessary) in response to the assessment. This algorithmic impact assessment should include at least: the results of any consultation, design stage equity assessments (potentially including qualitative analysis), accessibility designs and testing, disparity testing, documentation of any remaining disparities, and details of any mitigation implementation and assessments. This algorithmic impact assessment should be made public whenever possible. Reporting should be provided in a clear and machine-readable manner using plain language to allow for more straightforward public accountability; a hypothetical sketch of such a machine-readable record follows below.
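As one hypothetical illustration of plain-language, machine-readable reporting, an algorithmic impact assessment could be published as a structured record along the lines of the sketch below; the field names and example values are assumptions, not a schema prescribed by this framework.

```python
# Illustrative machine-readable algorithmic impact assessment record.
# Field names and example values are hypothetical placeholders.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    assessor: str                       # who performed the assessment
    evaluator: str                      # who evaluated the system
    consultation_results: str
    design_stage_equity_assessment: str
    accessibility_testing: str
    disparity_testing_results: dict     # e.g., subgroup metrics
    remaining_disparities: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)
    corrective_actions: list = field(default_factory=list)

# Hypothetical example, serialized for public release in a machine-readable form.
example = AlgorithmicImpactAssessment(
    system_name="loan-underwriting-model",
    assessor="internal responsible-AI team",
    evaluator="independent third-party auditor",
    consultation_results="summary of community consultation",
    design_stage_equity_assessment="summary of design-stage equity review",
    accessibility_testing="summary of accessibility design and testing",
    disparity_testing_results={"selection_rate_gap": 0.03},
    remaining_disparities=["small calibration gap for one subgroup"],
    mitigations=["re-weighted training data"],
    corrective_actions=["quarterly re-assessment scheduled"],
)
print(json.dumps(asdict(example), indent=2))
```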

Real-life examples of how these principles can become reality, through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

  • The federal government is working to combat discrimination in mortgage lending. The Department of Justice has launched a nationwide initiative to combat redlining, which includes reviewing how lenders who may be avoiding serving communities of color are conducting targeted marketing and advertising.[xxii] This initiative will draw upon strong partnerships across federal agencies, including the Consumer Financial Protection Bureau and prudential regulators. The Action Plan to Advance Property Appraisal and Valuation Equity includes a commitment from the agencies that oversee mortgage lending to include a nondiscrimination standard in the proposed rules for Automated Valuation Models.[xxiii]
  • The Equal Employment Opportunity Commission and the Department of Justice have clearly laid out how employers’ use of AI and other automated systems can result in discrimination against job applicants and employees with disabilities.[xxiv] The documents explain how employers’ use of software that relies on algorithmic decision-making may violate existing requirements under Title I of the Americans with Disabilities Act (“ADA”). This technical assistance also provides practical tips to employers on how to comply with the ADA, and to job applicants and employees who think that their rights may have been violated.
  • Disparity assessments identified harms to Black patients’ healthcare access. A widely used healthcare algorithm relied on the cost of each patient’s past medical care to predict future medical needs, recommending early interventions for the patients deemed most at risk. This process discriminated against Black patients, who generally have less access to medical care and therefore have generated less cost than white patients with similar illness and need. A landmark study documented this pattern and proposed practical ways that were shown to reduce this bias, such as focusing specifically on active chronic health conditions or avoidable future costs related to emergency visits and hospitalization.[xxv]
  • Large employers have developed best practices to scrutinize the data and models used for hiring. An industry initiative has developed Algorithmic Bias Safeguards for the Workforce, a structured questionnaire that businesses can use proactively when procuring software to evaluate workers. It covers specific technical questions such as the training data used, model training process, biases identified, and mitigation steps employed.[xxvi]
  • Standards organizations have developed guidelines to incorporate accessibility criteria into technology design processes. The most prevalent in the United States is the Access Board’s Section 508 regulations,[xxvii] which are the technical standards for federal information communication technology (software, hardware, and web). Other standards include those issued by the International Organization for Standardization,[xxviii] and the World Wide Web Consortium  Web Content Accessibility Guidelines,[xxix] a globally recognized voluntary consensus standard for web content and other information and communications technology.
  • NIST has released Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.[xxx] The special publication: describes the stakes and challenges of bias in artificial intelligence and provides examples of how and why it can chip away at public trust; identifies three categories of bias in AI – systemic, statistical, and human – and describes how and where they contribute to harms; and describes three broad challenges for mitigating bias – datasets, testing and evaluation, and human factors – and introduces preliminary guidance for addressing them. Throughout, the special publication takes a socio-technical perspective to identifying and managing AI bias.

[i] See, e.g., Executive Office of the President. Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. May, 2016. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf; Cathy O’Neil. Weapons of Math Destruction. Penguin Books. 2017. https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction; Ruha Benjamin. Race After Technology: Abolitionist Tools for the New Jim Code. Polity. 2019. https://www.ruhabenjamin.com/race-after-technology    

[ii] See, e.g., Kashmir Hill. Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match: A New Jersey man was accused of shoplifting and trying to hit an officer with a car. He is the third known Black man to be wrongfully arrested based on face recognition. New York Times. Dec. 29, 2020, updated Jan. 6, 2021. https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html; Khari Johnson. How Wrongful Arrests Based on AI Derailed 3 Men’s Lives. Wired. Mar. 7, 2022. https://www.wired.com/story/wrongful-arrests-ai-derailed-3-mens-lives/

[iii] Student Borrower Protection Center. Educational Redlining. Student Borrower Protection Center Report. Feb. 2020. https://protectborrowers.org/wp-content/uploads/2020/02/Education-Redlining-Report.pdf

[iv] Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Oct. 10, 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

[v] Todd Feathers. Major Universities Are Using Race as a “High Impact Predictor” of Student Success: Students, professors, and education experts worry that that’s pushing Black students in particular out of math and science. The Markup. Mar. 2, 2021. https://themarkup.org/machine-learning/2021/03/02/major-universities-are-using-race-as-a-high-impact-predictor-of-student-success

[vi] Carrie Johnson. Flaws plague a tool meant to help low-risk federal prisoners win early release. NPR. Jan. 26, 2022. https://www.npr.org/2022/01/26/1075509175/flaws-plague-a-tool-meant-to-help-low-risk-federal-prisoners-win-early-release.; Carrie Johnson. Justice Department works to curb racial bias in deciding who’s released from prison. NPR. Apr. 19, 2022. https://www.npr.org/2022/04/19/1093538706/justice-department-works-to-curb-racial-bias-in-deciding-whos-released-from-pris; National Institute of Justice. 2021 Review and Revalidation of the First Step Act Risk Assessment Tool. National Institute of Justice NCJ 303859. Dec., 2021. https://www.ojp.gov/pdffiles1/nij/303859.pdf 

[vii] Andrew Thompson. Google’s Sentiment Analyzer Thinks Being Gay Is Bad. Vice. Oct. 25, 2017. https://www.vice.com/en/article/j5jmj8/google-artificial-intelligence-bias

[viii] Kaggle. Jigsaw Unintended Bias in Toxicity Classification: Detect toxicity across a diverse range of conversations. 2019. https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification

[ix] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. Measuring and Mitigating Unintended Bias in Text Classification. Proceedings of AAAI/ACM Conference on AI, Ethics, and Society. Feb. 2-3, 2018. https://dl.acm.org/doi/pdf/10.1145/3278721.3278729

[x] Paresh Dave. Google cuts racy results by 30% for searches like ‘Latina teenager’. Reuters. Mar. 30, 2022. https://www.reuters.com/technology/google-cuts-racy-results-by-30-searches-like-latina-teenager-2022-03-30/

[xi] Safiya Umoja Noble. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press. Feb. 2018. https://nyupress.org/9781479837243/algorithms-of-oppression/

[xii] Paresh Dave. Google cuts racy results by 30% for searches like ‘Latina teenager’. Reuters. Mar. 30, 2022. https://www.reuters.com/technology/google-cuts-racy-results-by-30-searches-like-latina-teenager-2022-03-30/

[xiii] Miranda Bogen. All the Ways Hiring Algorithms Can Introduce Bias. Harvard Business Review. May 6, 2019. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias

[xiv] Arli Christian. Four Ways the TSA Is Making Flying Easier for Transgender People. American Civil Liberties Union. Apr. 5, 2022. https://www.aclu.org/news/lgbtq-rights/four-ways-the-tsa-is-making-flying-easier-for-transgender-people

[xv] U.S. Transportation Security Administration. Transgender/ Non Binary / Gender Nonconforming Passengers. TSA. Accessed Apr. 21, 2022. https://www.tsa.gov/transgender-passengers

[xvi] See, e.g., National Disabled Law Students Association. Report on Concerns Regarding Online Administration of Bar Exams. Jul. 29, 2020. https://ndlsa.org/wp-content/uploads/2020/08/NDLSA_Online-Exam-Concerns-Report1.pdf; Lydia X. Z. Brown. How Automated Test Proctoring Software Discriminates Against Disabled Students. Center for Democracy and Technology. Nov. 16, 2020. https://cdt.org/insights/how-automated-test-proctoring-software-discriminates-against-disabled-students/

[xvii] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Vol. 366, No. 6464. Oct. 25, 2019. https://www.science.org/doi/10.1126/science.aax2342

[xviii] Darshali A. Vyas, et al. Hidden in Plain Sight – Reconsidering the Use of Race Correction in Clinical Algorithms. New England Journal of Medicine. Vol. 383, pp. 874, 876-78. Aug. 27, 2020. https://www.nejm.org/doi/full/10.1056/NEJMms2004740

[xix] The definitions of ‘equity’ and ‘underserved communities’ can be found in the Definitions section of this framework as well as in Section 2 of The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/

[xx] Id.

[xxi] Various organizations have offered proposals for how such assessments might be designed. See, e.g., Emanuel Moss, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. Assembling Accountability: Algorithmic Impact Assessment for the Public Interest. Data & Society Research Institute Report. June 29, 2021. https://datasociety.net/library/assembling-accountability-algorithmic-impact-assessment-for-the-public-interest/; Nicol Turner Lee, Paul Resnick, and Genie Barton. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings Report. May 22, 2019. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/; Andrew D. Selbst. An Institutional View Of Algorithmic Impact Assessments. Harvard Journal of Law & Technology. June 15, 2021. https://ssrn.com/abstract=3867634; Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability. AI Now Institute Report. April 2018. https://ainowinstitute.org/aiareport2018.pdf

[xxii] Department of Justice. Justice Department Announces New Initiative to Combat Redlining. Oct. 22, 2021. https://www.justice.gov/opa/pr/justice-department-announces-new-initiative-combat-redlining

[xxiii] PAVE Interagency Task Force on Property Appraisal and Valuation Equity. Action Plan to Advance Property Appraisal and Valuation Equity: Closing the Racial Wealth Gap by Addressing Mis-valuations for Families and Communities of Color. March 2022. https://pave.hud.gov/sites/pave.hud.gov/files/documents/PAVEActionPlan.pdf

[xxiv] U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. EEOC-NVTA-2022-2. May 12, 2022. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai-guidance/ 

[xxv] Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science. Vol. 366, No. 6464. Oct. 25, 2019. https://www.science.org/doi/10.1126/science.aax2342

[xxvi] Data & Trust Alliance. Algorithmic Bias Safeguards for Workforce: Overview. Jan. 2022. https://dataandtrustalliance.org/Algorithmic_Bias_Safeguards_for_Workforce_Overview.pdf

[xxvii] Section508.gov. IT Accessibility Laws and Policies. Access Board. https://www.section508.gov/manage/laws-and-policies/

[xxviii] ISO Technical Management Board. ISO/IEC Guide 71:2014. Guide for addressing accessibility in standards. International Organization for Standardization. 2021. https://www.iso.org/standard/57385.html

[xxix] World Wide Web Consortium. Web Content Accessibility Guidelines (WCAG) 2.0. Dec. 11, 2008. https://www.w3.org/TR/WCAG20/

[xxx] Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, and Andrew Bert. NIST Special Publication 1270: Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. The National Institute of Standards and Technology. March, 2022. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
