
You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

This section provides a brief summary of the problems that the principle seeks to address and protect against, including illustrative examples.

While technologies are being deployed to solve problems across a wide array of issues, our reliance on technology can also lead to its use in situations where it has not yet been proven to work—either at all or within an acceptable range of error. In other cases, technologies do not work as intended or as promised, causing substantial and unjustified harm. Automated systems sometimes rely on data from other systems, including historical data, allowing irrelevant information from past decisions to infect decision-making in unrelated situations. In some cases, technologies are purposefully designed to violate the safety of others, such as technologies designed to facilitate stalking; in other cases, intended or unintended uses lead to unintended harms.

Many of the harms resulting from these technologies are preventable, and actions are already being taken to protect the public. Some companies have put in place safeguards that have prevented harm from occurring by ensuring that key development decisions are vetted by an ethics review; others have identified and mitigated harms found through pre-deployment testing and ongoing monitoring processes. Governments at all levels have existing public consultation processes that may be applied when considering the use of new automated systems, and existing product development and testing practices already protect the American public from many potential harms.

Still, these kinds of practices are deployed too rarely and unevenly. Expanded, proactive protections could build on these existing practices, increase confidence in the use of automated systems, and protect the American public. Innovators deserve clear rules of the road that allow new ideas to flourish, and the American public deserves protections from unsafe outcomes. All can benefit from assurances that automated systems will be designed, tested, and consistently confirmed to work as intended, and that they will be proactively protected from foreseeable unintended harmful outcomes.

  • A proprietary model was developed to predict the likelihood of sepsis in hospitalized patients and was implemented at hundreds of hospitals around the country. An independent study showed that the model predictions underperformed relative to the designer’s claims while also causing ‘alert fatigue’ by falsely alerting likelihood of sepsis.[i]
  • On social media, Black people who quote and criticize racist messages have had their own speech silenced when a platform’s automated moderation system failed to distinguish this “counter speech” (or other critique and journalism) from the original hateful messages to which such speech responded.[ii]
  • A device originally developed to help people track and find lost items has been used as a tool by stalkers to track victims’ locations in violation of their privacy and safety. The device manufacturer took steps after release to protect people from unwanted tracking by alerting people on their phones when a device is found to be moving with them over time and also by having the device make an occasional noise, but not all phones are able to receive the notification and the devices remain a safety concern due to their misuse.[iii]
  • An algorithm used to deploy police was found to repeatedly send police to neighborhoods they regularly visit, even if those neighborhoods were not the ones with the highest crime rates. These incorrect crime predictions were the result of a feedback loop generated from the reuse of data from previous arrests and algorithm predictions.[iv]
  • AI-enabled “nudification” technology that creates images where people appear to be nude—including apps that enable non-technical users to create or alter images of individuals without their consent—has proliferated at an alarming rate. Such technology is becoming a common form of image-based abuse that disproportionately impacts women. As these tools become more sophisticated, they are producing altered images that are increasingly realistic and are difficult for both humans and AI to detect as inauthentic. Regardless of authenticity, the experience of harm to victims of non-consensual intimate images can be devastatingly real—affecting their personal and professional lives, and impacting their mental and physical health.[v]
  • A company installed AI-powered cameras in its delivery vans in order to evaluate the road safety habits of its drivers, but the system incorrectly penalized drivers when other cars cut them off or when other events beyond their control took place on the road. As a result, drivers were incorrectly ineligible to receive a bonus.[vi]

The expectations for automated systems are meant to serve as a blueprint for the development of additional technical standards and practices that are tailored for particular sectors and contexts.

In order to ensure that an automated system is safe and effective, it should include safeguards to protect the public from harm in a proactive and ongoing manner; avoid use of data inappropriate for or irrelevant to the task at hand, including reuse that could cause compounded harm; and demonstrate the safety and effectiveness of the system. These expectations are explained below.

Protect the public from harm in a proactive and ongoing manner

  • Consultation. The public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of automated system development, with emphasis on early-stage consultation before a system is introduced or a large change implemented. This consultation should directly engage diverse impacted communities to consider concerns and risks that may be unique to those communities, or disproportionately prevalent or severe for them. The extent of this engagement and the form of outreach to relevant stakeholders may differ depending on the specific automated system and development phase, but should include subject matter, sector-specific, and context-specific experts as well as experts on potential impacts such as civil rights, civil liberties, and privacy experts. For private sector applications, consultations before product launch may need to be confidential. Government applications, particularly law enforcement applications or applications that raise national security considerations, may require confidential or limited engagement based on system sensitivities and preexisting oversight laws and structures. Concerns raised in this consultation should be documented, and the automated system that developers were proposing to create, use, or deploy should be reconsidered based on this feedback.
  • Testing. Systems should undergo extensive testing before deployment. This testing should follow domain-specific best practices, when available, for ensuring the technology will work in its real-world context. Such testing should take into account both the specific technology used and the roles of any human operators or reviewers who impact system outcomes or effectiveness; testing should include both automated systems testing and human-led (manual) testing. Testing conditions should mirror as closely as possible the conditions in which the system will be deployed, and new testing may be required for each deployment to account for material differences in conditions from one deployment to another. Following testing, system performance should be compared with the in-place, potentially human-driven, status quo procedures, with existing human performance considered as a performance baseline for the algorithm to meet pre-deployment, and as a lifecycle minimum performance standard. Decision possibilities resulting from performance testing should include the possibility of not deploying the system.
  • Risk identification and mitigation. Before deployment, and in a proactive and ongoing manner, potential risks of the automated system should be identified and mitigated. Identified risks should focus on the potential for meaningful impact on people’s rights, opportunities, or access and include those to impacted communities that may not be direct users of the automated system, risks resulting from purposeful misuse of the system, and other concerns identified via the consultation process. Assessment and, where possible, measurement of the impact of risks should be included and balanced such that high impact risks receive attention and mitigation proportionate with those impacts. Automated systems with the intended purpose of violating the safety of others should not be developed or used; systems with such safety violations as identified unintended consequences should not be used until the risk can be mitigated. Ongoing risk mitigation may necessitate rollback or significant modification to a launched automated system.
  • Ongoing monitoring. Automated systems should have ongoing monitoring procedures, including recalibration procedures, in place to ensure that their performance does not fall below an acceptable level over time, based on changing real-world conditions or deployment contexts, post-deployment modification, or unexpected conditions. This ongoing monitoring should include continuous evaluation of performance metrics and harm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well as ensuring that fallback mechanisms are in place to allow reversion to a previously working system. Monitoring should take into account the performance of both technical system components (the algorithm as well as any hardware components, data inputs, etc.) and human operators. It should include mechanisms for testing the actual accuracy of any predictions or recommendations generated by a system, not just a human operator’s determination of their accuracy. Ongoing monitoring procedures should include manual, human-led monitoring as a check in the event there are shortcomings in automated monitoring systems. These monitoring procedures should be in place for the lifespan of the deployed automated system.
  • Clear organizational oversight. Entities responsible for the development or use of automated systems should lay out clear governance structures and procedures. This includes clearly-stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing assessment and mitigation. Organizational stakeholders, including those with oversight of the business process or operation being automated, as well as other organizational divisions that may be affected by the use of the system, should be involved in establishing governance procedures. Responsibility should rest high enough in the organization that decisions about resources, mitigation, incident response, and potential rollback can be made promptly, with sufficient weight given to risk mitigation objectives against competing concerns. Those holding this responsibility should be made aware of any use cases with the potential for meaningful impact on people’s rights, opportunities, or access as determined based on risk identification procedures. In some cases, it may be appropriate for an independent ethics review to be conducted before deployment.
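The ongoing-monitoring expectation above—a pre-deployment human baseline treated as a lifecycle minimum, with rollback to a previously working system when performance falls below it—can be sketched as a simple threshold check. Everything here (the metric names, thresholds, and action labels) is a hypothetical illustration, not a procedure prescribed by this framework:

```python
from dataclasses import dataclass


@dataclass
class MonitoringPolicy:
    # Hypothetical thresholds: the pre-deployment human baseline serves as
    # the lifecycle minimum performance standard described above.
    human_baseline_accuracy: float
    max_false_alert_rate: float


def evaluate_cycle(policy: MonitoringPolicy,
                   accuracy: float,
                   false_alert_rate: float) -> str:
    """Return an action for one monitoring cycle.

    'rollback' models reversion to the previously working system;
    'recalibrate' models retraining or recalibration before continued use.
    """
    if accuracy < policy.human_baseline_accuracy:
        return "rollback"        # below the lifecycle minimum: revert
    if false_alert_rate > policy.max_false_alert_rate:
        return "recalibrate"     # e.g., mitigate 'alert fatigue'
    return "continue"


policy = MonitoringPolicy(human_baseline_accuracy=0.85,
                          max_false_alert_rate=0.10)
print(evaluate_cycle(policy, accuracy=0.80, false_alert_rate=0.05))  # rollback
```

In practice such checks would feed the governance structure described in the oversight bullet, so that a "rollback" result reaches someone with the authority to act on it promptly.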

Avoid inappropriate, low-quality, or irrelevant data use and the compounded harm of its reuse

  • Relevant and high-quality data. Data used as part of any automated system’s creation, evaluation, or deployment should be relevant, of high quality, and tailored to the task at hand. Relevancy should be established based on research-backed demonstration of the causal influence of the data to the specific use case or justified more generally based on a reasonable expectation of usefulness in the domain and/or for the system design or ongoing development. Relevance of data should not be established solely by appealing to its historical connection to the outcome. High quality and tailored data should be representative of the task at hand and errors from data entry or other sources should be measured and limited. Any data used as the target of a prediction process should receive particular attention to the quality and validity of the predicted outcome or label to ensure the goal of the automated system is appropriately identified and measured. Additionally, justification should be documented for each data attribute and source to explain why it is appropriate to use that data to inform the results of the automated system and why such use will not violate any applicable laws. In cases of high-dimensional and/or derived attributes, such justifications can be provided as overall descriptions of the attribute generation process and appropriateness.
  • Derived data sources tracked and reviewed carefully. Data that is derived from other data through the use of algorithms, such as data derived or inferred from prior model outputs, should be identified and tracked, e.g., via a specialized type in a data schema. Derived data should be viewed as potentially high-risk inputs that may lead to feedback loops, compounded harm, or inaccurate results. Such sources should be carefully validated against the risk of collateral consequences.
  • Data reuse limits in sensitive domains. Data reuse, and especially data reuse in a new context, can result in the spreading and scaling of harms. Data from some domains, including criminal justice data and data indicating adverse outcomes in domains such as finance, employment, and housing, is especially sensitive, and in some cases its reuse is limited by law. Accordingly, such data should be subject to extra oversight to ensure safety and efficacy. Data reuse of sensitive domain data in other contexts (e.g., criminal data reuse for civil legal matters or private sector use) should only occur where use of such data is legally authorized and, after examination, has benefits for those impacted by the system that outweigh identified risks and, as appropriate, reasonable measures have been implemented to mitigate the identified risks. Such data should be clearly labeled to identify contexts for limited reuse based on sensitivity. Where possible, aggregated datasets may be useful for replacing individual-level sensitive data.
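One way to implement the ideas above—a "specialized type in a data schema" for derived data, sensitivity labels that flag limited-reuse contexts, and documented relevancy justifications—is to carry that metadata alongside every attribute. The field names and example values below are illustrative assumptions, not a schema this framework mandates:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class DataAttribute:
    name: str
    source: str
    derived: bool = False                   # produced by a prior model or algorithm?
    sensitive_domain: Optional[str] = None  # e.g., "criminal_justice", "housing"
    relevancy_justification: str = ""       # documented reason this data is relevant


def needs_extra_review(attr: DataAttribute) -> bool:
    # Derived data is treated as a potentially high-risk input (feedback
    # loops, compounded harm); sensitive-domain data requires extra
    # oversight before any reuse in a new context.
    return attr.derived or attr.sensitive_domain is not None


# Hypothetical attribute inferred from an upstream model's outputs.
arrests = DataAttribute(
    name="prior_arrests_predicted",
    source="upstream_model_v2",
    derived=True,
    sensitive_domain="criminal_justice",
    relevancy_justification="",  # must be filled in before use
)
print(needs_extra_review(arrests))  # True
```

Tracking derived data this way makes the feedback loop in the predictive-policing example above detectable: an attribute whose source is a prior model's predictions is flagged before it can silently re-enter training data.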

Demonstrate the safety and effectiveness of the system

  • Independent evaluation. Automated systems should be designed to allow for independent evaluation (e.g., via application programming interfaces). Independent evaluators, such as researchers, journalists, ethics review boards, inspectors general, and third-party auditors, should be given access to the system and samples of associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual property law), in order to perform such evaluations. Mechanisms should be included to ensure that system access for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to provide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot be revoked without reasonable and verified justification.
  • Reporting.[vii] Entities responsible for the development or use of automated systems should provide regularly-updated reports that include: an overview of the system, including how it is embedded in the organization’s business processes or other activities, system goals, any human-run procedures that form a part of the system, and specific performance expectations; a description of any data used to train machine learning models or for other purposes, including how data sources were processed and interpreted, a summary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the results of public consultation such as concerns raised and any decisions made due to these concerns; risk identification and management assessments and any steps taken to mitigate potential harms; the results of performance testing including, but not limited to, accuracy, differential demographic impact, resulting error rates (overall and per demographic group), and comparisons to previously deployed systems; ongoing monitoring procedures and regular performance testing reports, including monitoring frequency, results, and actions taken; and the procedures for and results from independent evaluations. Reporting should be provided in plain language and a machine-readable manner.
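The reporting elements listed above lend themselves to a machine-readable skeleton that an entity could publish alongside its plain-language report. The JSON field names below are an illustrative sketch, not a mandated schema:

```python
import json

# Hypothetical report skeleton covering the elements listed above.
report = {
    "system_overview": {
        "goals": "",
        "human_run_procedures": "",
        "performance_expectations": "",
    },
    "training_data": {
        "sources": [],
        "known_gaps_or_errors": "",
        "relevancy_justifications": [],
    },
    "public_consultation": {"concerns_raised": [], "decisions_made": []},
    "risk_management": {"identified_risks": [], "mitigations": []},
    "performance_testing": {
        "accuracy": None,
        "error_rates_by_group": {},
        "comparison_to_prior_system": None,
    },
    "ongoing_monitoring": {"frequency": "", "results": [], "actions_taken": []},
    "independent_evaluations": [],
}

# Machine-readable for auditors and regulators; the plain-language
# narrative would accompany this file rather than replace it.
print(json.dumps(report, indent=2))
```

A fixed, versioned structure like this lets independent evaluators compare reports across entities and across regular update cycles.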

The following are real-life examples of how these principles can become reality through laws, policies, and practical technical and sociotechnical approaches to protecting rights, opportunities, and access.

  • Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government requires that certain federal agencies adhere to nine principles when designing, developing, acquiring, or using AI for purposes other than national security or defense. These principles—while taking into account the sensitive law enforcement and other contexts in which the federal government may use AI, as opposed to private sector use of AI—require that AI is: (a) lawful and respectful of our Nation’s values; (b) purposeful and performance-driven; (c) accurate, reliable, and effective; (d) safe, secure, and resilient; (e) understandable; (f) responsible and traceable; (g) regularly monitored; (h) transparent; and, (i) accountable. The Blueprint for an AI Bill of Rights is consistent with the Executive Order. Affected agencies across the federal government have released AI use case inventories[xxv] and are implementing plans to bring those AI systems into compliance with the Executive Order or retire them.
  • The law and policy landscape for motor vehicles shows that strong safety regulations, and measures to address harms when they occur, can enhance innovation in the context of complex technologies. Cars, like automated digital systems, comprise a complex collection of components. The National Highway Traffic Safety Administration,[ix] through its rigorous standards and independent evaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to innovate.[x] At the same time, rules of the road are implemented locally to impose contextually appropriate requirements on drivers, such as slowing down near schools or playgrounds.[xi]
  • From large companies to start-ups, industry is providing innovative solutions that allow organizations to mitigate risks to the safety and efficacy of AI systems, both before deployment and through monitoring over time.[xii] These innovative solutions include risk assessments, auditing mechanisms, assessment of organizational procedures, dashboards to allow for ongoing monitoring, documentation procedures specific to model assessments, and many other strategies that aim to mitigate risks posed by the use of AI to companies’ reputation, legal responsibilities, and other product safety and effectiveness concerns.
  • The Office of Management and Budget (OMB) has called for an expansion of opportunities for meaningful stakeholder engagement in the design of programs and services. OMB also points to numerous examples of effective and proactive stakeholder engagement, including the Community-Based Participatory Research Program developed by the National Institutes of Health and the participatory technology assessments developed by the National Oceanic and Atmospheric Administration.[xiii]
  • The National Institute of Standards and Technology (NIST) is developing a risk management framework to better manage risks posed to individuals, organizations, and society by AI.[xiv] The NIST AI Risk Management Framework, as mandated by Congress, is intended for voluntary use to help incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The NIST framework is being developed through a consensus-driven, open, transparent, and collaborative process that includes workshops and other opportunities to provide input. The NIST framework aims to foster the development of innovative approaches to address characteristics of trustworthiness including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as of harmful uses. The NIST framework will consider and encompass principles such as transparency, accountability, and fairness during pre-design, design and development, deployment, use, and testing and evaluation of AI technologies and systems. It is expected to be released in the winter of 2022-23.
  • Some U.S. government agencies have developed specific frameworks for ethical use of AI systems. The Department of Energy (DOE) has activated the AI Advancement Council that oversees coordination and advises on implementation of the DOE AI Strategy and addresses issues and/or escalations on the ethical use and development of AI systems.[xv] The Department of Defense has adopted Artificial Intelligence Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its national security and defense activities.[xvi] Similarly, the U.S. Intelligence Community (IC) has developed the Principles of Artificial Intelligence Ethics for the Intelligence Community to guide personnel on whether and how to develop and use AI in furtherance of the IC’s mission, as well as an AI Ethics Framework to help implement these principles.[xvii]
  • The National Science Foundation (NSF) funds extensive research to help foster the development of automated systems that adhere to and advance their safety, security, and effectiveness. Multiple NSF programs support research that directly addresses many of these principles: the National AI Research Institutes[xviii] support research on all aspects of safe, trustworthy, fair, and explainable AI algorithms and systems; the Cyber Physical Systems[xix] program supports research on developing safe autonomous and cyber physical systems with AI components; the Secure and Trustworthy Cyberspace[xx] program supports research on cybersecurity and privacy enhancing technologies in automated systems; the Formal Methods in the Field[xxi] program supports research on rigorous formal verification and analysis of automated systems and machine learning; and the Designing Accountable Software Systems[xxii] program supports research on rigorous and reproducible methodologies for developing software systems with legal and regulatory compliance in mind.
  • Some state legislatures have placed strong transparency and validity requirements on the use of pretrial risk assessments. The use of algorithmic pretrial risk assessments has been a cause of concern for civil rights groups.[xxiii] Idaho Code Section 19-1910, enacted in 2019,[xxiv] requires that any pretrial risk assessment, before use in the state, first be “shown to be free of bias against any class of individuals protected from discrimination by state or federal law”, that any locality using a pretrial risk assessment must first formally validate the claim of its being free of bias, that “all documents, records, and information used to build or validate the risk assessment shall be open to public inspection,” and that assertions of trade secrets cannot be used “to quash discovery in a criminal matter by a party to a criminal case.”

[i] Andrew Wong et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021; 181(8):1065-1070. doi:10.1001/jamainternmed.2021.2626

[ii] Jessica Guynn. Facebook while black: Users call it getting ‘Zucked,’ say talking about racism is censored as hate speech. USA Today. Apr. 24, 2019.

[iii] See, e.g., Michael Levitt. AirTags are being used to track people and cars. Here’s what is being done about it. NPR. Feb. 18, 2022; Samantha Cole. Police Records Show Women Are Being Stalked With Apple AirTags Across the Country. Motherboard. Apr. 6, 2022.

[iv] Kristian Lum and William Isaac. To Predict and Serve? Significance. Vol. 13, No. 5, p. 14-19. Oct. 7, 2016; Aaron Sankin, Dhruv Mehrotra, Surya Mattu, and Annie Gilbertson. Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them. The Markup and Gizmodo. Dec. 2, 2021.

[v] Samantha Cole. This Horrifying App Undresses a Photo of Any Woman With a Single Click. Motherboard. June 26, 2019.

[vi] Lauren Kaori Gurley. Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make. Motherboard. Sep. 20, 2021.

[vii] Expectations about reporting are intended for the entity developing or using the automated system. The resulting reports can be provided to the public, regulators, auditors, industry standards groups, or others engaged in independent review, and should be made public as much as possible consistent with law, regulation, and policy, and noting that intellectual property or law enforcement considerations may prevent public release. These reporting expectations are important for transparency, so the American people can have confidence that their rights, opportunities, and access as well as their expectations around technologies are respected.

[xvii] Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence Community.

[xviii] National Science Foundation. National Artificial Intelligence Research Institutes. Accessed Sept. 12, 2022.

[xix] National Science Foundation. Cyber-Physical Systems. Accessed Sept. 12, 2022.

[xx] National Science Foundation. Secure and Trustworthy Cyberspace. Accessed Sept. 12, 2022.

[xxi] National Science Foundation. Formal Methods in the Field. Accessed Sept. 12, 2022.

[xxii] National Science Foundation. Designing Accountable Software Systems. Accessed Sept. 12, 2022.

[xxiii] The Leadership Conference Education Fund. The Use Of Pretrial “Risk Assessment” Instruments: A Shared Statement Of Civil Rights Concerns. Jul. 30, 2018.

[xxiv] Idaho Legislature. House Bill 118. Jul. 1, 2019.

[xxv] National Artificial Intelligence Initiative Office. Agency Inventories of AI Use Cases. Accessed Sept. 8, 2022.
