The President’s Council of Advisors on Science and Technology (PCAST) has launched a working group on generative artificial intelligence (AI) to help assess key opportunities and risks and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible. 

Generative AI refers to a class of AI systems that, after being trained on large data sets, can be used to generate text, images, videos, or other outputs from a given prompt. These technologies are advancing rapidly and have the potential to revolutionize many aspects of modern life. In the sciences, these tools are being used to design new drugs, proteins, and materials, and promise to accelerate the pace of discovery. In medicine, generative AI has the potential to provide advice to healthcare professionals. In the workplace, these tools speed up the writing of computer code, help with composing presentations, and summarize documents.

However, generative AI models can also be used for malicious purposes, such as creating disinformation, driving misinformation campaigns, and impersonating individuals. When used without safeguards, generative AI can stoke polarization, exacerbate biases and inequities in society, and, more generally, threaten democracy by making it difficult for citizens to understand events in the world. Further, generative AI systems can violate privacy and undermine intellectual property rights. As with many advances in science and technology, a balance must be struck between encouraging innovation and beneficial applications of the technology on the one hand, and identifying and mitigating potential harms on the other.

U.S. government agencies are actively helping to achieve this balance. For instance, the White House Blueprint for an AI Bill of Rights lays out core aspirational principles to guide the responsible design and deployment of AI technologies. The National Institute of Standards and Technology (NIST) released the AI Risk Management Framework to help organizations and individuals characterize and manage the potential risks of AI technologies. Congress created the National Security Commission on AI, which studied the opportunities and risks ahead and the importance of guiding the development of AI in accordance with American values around democracy and civil liberties. The National Artificial Intelligence Initiative was launched to ensure U.S. leadership in the responsible development and deployment of trustworthy AI and to support coordination of U.S. research, development, and demonstration of AI technologies across the Federal government. In January 2023, the Congressionally mandated National AI Research Resource (NAIRR) Task Force released an implementation plan for providing computational, data, testbed, and software resources to AI researchers affiliated with U.S. organizations.

The PCAST Working Group on Generative AI aims to build upon these existing efforts by identifying additional needs and opportunities and making recommendations to the President for how best to address them. Over the course of the year, PCAST will be consulting with experts from all sectors, beginning with panel discussions at our next public meeting on May 19, 2023. We also welcome input from the public on the challenges and opportunities that should be considered, along with potential solutions, for the benefit of the Nation.

PCAST Public Session on Generative AI
We invite you to watch the public PCAST meeting on May 19. This meeting will include two expert panel discussions on the topics of 1) AI Enabling Science and 2) AI Impacts on Society. This meeting will be livestreamed and accessible via the PCAST website.

PCAST Invites Input from the Public on Generative AI
We also invite written submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. Submissions should be no more than 5 pages in length, provide actionable ideas, and not include proprietary information or any information inappropriate for public disclosure.

Please send your ideas by August 1, 2023 to pcast@ostp.eop.gov with “Generative AI” in the subject line. We especially welcome comments addressing the following questions (please indicate in your submission which questions you are addressing):

  1. In an era in which convincing images, audio, and text can be generated with ease on a massive scale, how can we ensure reliable access to verifiable, trustworthy information? How can we be certain that a particular piece of media is genuinely from the claimed source?
  2. How can we best deal with the use of AI by malicious actors to manipulate the beliefs and understanding of citizens?
  3. What technologies, policies, and infrastructure can be developed to detect and counter AI-generated disinformation?
  4. How can we ensure that the engagement of the public with elected representatives—a cornerstone of democracy—is not drowned out by AI-generated noise?
  5. How can we help everyone, including our scientific, political, industrial, and educational leaders, develop the skills needed to identify AI-generated misinformation, impersonation, and manipulation?

Unfortunately, we cannot commit to responding to all submissions, but we may invite contributors to present their ideas to the working group as part of our evolving process for developing recommendations.

We also encourage submissions to three formal federal agency requests for information and comments related to AI.

Thank you in advance for your ideas. In the months to come we may seek further public input on the topic of generative AI. 

PCAST Working Group on Generative AI
Laura Greene, Co-Lead
Terence Tao, Co-Lead
Bill Dally
Eric Horvitz
Jon Levin
Saul Perlmutter
Bill Press
Lisa Su
Phil Venables


###
