As part of her visit to the United Kingdom to deliver a major policy speech on Artificial Intelligence (AI) and attend the Global Summit on AI Safety, Vice President Kamala Harris is announcing a series of new U.S. initiatives to advance the safe and responsible use of AI. These bold actions demonstrate U.S. leadership on AI and build upon the historic Executive Order signed by President Biden on October 30.

Since taking office, President Biden and the Vice President have moved with urgency to seize the promise and manage the risks posed by AI. The Biden-Harris Administration is working with the private sector, other governments, and civil society to uphold the highest standards to ensure that innovation does not come at the expense of the public’s rights and safety.

As part of her global work to strengthen international rules and norms, the Vice President is committed to establishing a set of rules and norms for AI, with allies and partners, that reflect democratic values and interests, including transparency, privacy, accountability, and consumer protections. Her trip to London and participation in the Global Summit on AI Safety will further advance this work.

The Vice President’s trip to the United Kingdom builds on her long record of leadership to confront the challenges and seize the opportunities of advanced technology. In May, she convened the CEOs of companies at the forefront of AI innovation, resulting in voluntary commitments from 15 leading AI companies to help move toward safe, secure, and transparent development of AI technology. In July, the Vice President convened consumer protection, labor, and civil rights leaders to discuss the risks related to AI and to underscore that it is a false choice to suggest America can either advance innovation or protect consumers’ rights.

As part of her visit to the United Kingdom, the Vice President is announcing the following initiatives.

  • The United States AI Safety Institute: The Biden-Harris Administration, through the Department of Commerce, is establishing the United States AI Safety Institute (US AISI) inside NIST. The US AISI will operationalize NIST’s AI Risk Management Framework by creating guidelines, tools, benchmarks, and best practices for evaluating and mitigating dangerous capabilities, and by conducting evaluations, including red-teaming, to identify and mitigate AI risk. The Institute will develop technical guidance that will be used by regulators considering rulemaking and enforcement on issues such as authenticating content created by humans, watermarking AI-generated content, identifying and mitigating harmful algorithmic discrimination, ensuring transparency, and enabling adoption of privacy-preserving AI. It will also serve as a driver of the future workforce for safe and trusted AI, enable information-sharing and research collaboration with peer institutions internationally, including the UK’s planned AI Safety Institute (UK AISI), and partner with outside experts from civil society, academia, and industry.
  • Draft Policy Guidance on U.S. Government Use of AI: The Biden-Harris Administration, through the Office of Management and Budget, is releasing for public comment its first-ever draft policy guidance on the use of AI by the U.S. government. This draft policy builds on prior leadership, including the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and outlines concrete steps to advance responsible AI innovation in government, increase transparency and accountability, protect federal workers, and manage risks from sensitive uses of AI. In a wide range of contexts including health, education, employment, federal benefits, law enforcement, immigration, transportation, and critical infrastructure, the draft policy would create specific safeguards for uses of AI that impact the rights and safety of the public. This includes requiring that federal departments and agencies conduct AI impact assessments; identify, monitor, and mitigate AI risks; sufficiently train AI operators; conduct public notice and consultation for the use of AI; and offer options to appeal harms caused by AI. More details on this policy and how to comment can be found at ai.gov/input.
  • Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy: In February, the United States launched the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy. The Vice President is announcing that 31 nations have joined the United States in endorsing this Declaration and is calling on others to join. The Declaration establishes a set of norms for the responsible development, deployment, and use of military AI capabilities, helping states around the globe harness the benefits of AI, including capabilities enabling autonomous functions and systems, for their military and defense establishments in a responsible and lawful manner. These norms include compliance with International Humanitarian Law, properly training personnel, building in critical safeguards, and subjecting capabilities to rigorous testing and legal review. The Declaration marked the beginning of a crucial dialogue among responsible states regarding the implementation of these foundational principles and practices. As of November 1, countries joining the Declaration include: Albania, Australia, Belgium, Bulgaria, Canada, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Iceland, Ireland, Italy, Japan, Kosovo, Latvia, Liberia, Malawi, Montenegro, Morocco, North Macedonia, Portugal, Romania, Singapore, Slovenia, Spain, Sweden, and the United Kingdom.
  • New Funders Initiative to Advance AI in the Public Interest: Vice President Harris is announcing a bold new initiative with philanthropic organizations to advance AI in the public interest. This includes a vision for philanthropic giving to advance AI that is designed and used in the best interests of workers, consumers, communities, and historically marginalized people in the United States and across the globe. Ten leading foundations are announcing they have collectively committed more than $200 million in funding toward initiatives to advance the priorities laid out by the Vice President, and are forming a funders network to coordinate new philanthropic giving organized around five pillars: ensuring AI protects democracy and rights, driving AI innovation in the public interest, empowering workers to thrive amid AI-driven changes, improving transparency and accountability of AI, and supporting international rules and norms on AI. The foundations launching this effort are the David and Lucile Packard Foundation; Democracy Fund; the Ford Foundation; Heising-Simons Foundation; the John D. and Catherine T. MacArthur Foundation; Kapor Foundation; Mozilla Foundation; Omidyar Network; Open Society Foundations; and the Wallace Global Fund.

Additional actions:

  • Detecting and Blocking AI-Driven Fraudulent Phone Calls: The Biden-Harris Administration will launch an effort to counter fraudsters who use AI-generated voice models to target and steal from the most vulnerable in our communities. The White House will host a virtual hackathon, inviting companies to submit teams of technology experts to build AI models that can detect and block unwanted robocalls and robotexts, particularly those using novel AI-generated voice models that disproportionately harm the elderly. Promising approaches use metadata surrounding a phone call, along with voice models that detect AI-generated content, to terminate a call early or warn the recipient while the call is in progress. The Federal Communications Commission is exploring creative ideas for using AI to target AI-driven fraud and robocalls, and recommends continued joint engagement with the UK’s telecom regulator, Ofcom, on protecting consumers from robocalls through AI-driven defenses.
  • International Norms on Content Authentication: The Biden-Harris Administration is calling on all nations to support the development and implementation of international standards that enable the public to effectively identify and trace authentic government-produced digital content and AI-generated or manipulated content, including through digital signatures, watermarking, and other labeling techniques. This effort aims to increase global resilience against deceptive or harmful synthetic AI-generated or manipulated media. The call to action builds on the voluntary commitments by 15 leading AI companies to develop mechanisms that let users understand whether audio or visual content is AI-generated, and on the U.S. government’s commitment in the recently released Executive Order on AI to develop guidelines, tools, and practices for digital content authentication and synthetic content detection.
  • Pledge to Incorporate Responsible and Rights-Respecting Practices in Government Development, Procurement, and Use of AI: Building on the principles of the Draft Policy Guidance on U.S. Government Use of AI, the Biden-Harris Administration, through the State Department, intends to work with the 38-country Freedom Online Coalition to develop a pledge to incorporate responsible and rights-respecting practices in government development, procurement, and use of AI. Such a pledge is important to ensure that AI systems are developed and used in a manner that is consistent with applicable international law, including international human rights law, and that upholds democratic institutions and processes.

 ###
