The NAIRR Task Force deliberated on the findings and recommendations for its forthcoming interim report, which will provide a vision for the NAIRR; discussed plans for public engagement and for development of a final implementation plan for the NAIRR; and considered options for tools and processes to reinforce responsible artificial intelligence (AI) research through the NAIRR.

On April 8, the 12 members of the National AI Research Resource Task Force met virtually in their sixth public meeting, continuing their efforts to develop a vision and implementation plan for a NAIRR — a national cyberinfrastructure that would democratize access to the resources and tools that fuel AI research and development (R&D). Expansion of access to the tools and infrastructure necessary to conduct AI R&D would broaden the range of researchers involved in AI, grow and diversify approaches to and applications of AI, and open opportunities to advance R&D in AI and across related scientific fields and disciplines, including in critical areas such as AI auditing, testing and evaluation, bias mitigation, security, and more.

Task Force co-chairs Dr. Lynne Parker, Director of the National AI Initiative Office within the White House Office of Science and Technology Policy (OSTP), and Dr. Manish Parashar, Office Director for the Office of Advanced Cyberinfrastructure at the National Science Foundation (NSF), led the Task Force in a discussion on a draft of its interim report that will be submitted to the President and Congress in May 2022. This report will provide the Task Force’s general vision for a NAIRR along with a preliminary set of findings and recommendations for the design of the NAIRR architecture, resources, capabilities, and uses.

Looking ahead, the Task Force members deliberated on the key additional questions to be answered in developing a final report, anticipated for release in December 2022, which will provide a more detailed roadmap and implementation plan for realizing the NAIRR vision. Discussion focused on the need to define the scale and unique value-add of the NAIRR, identify any gaps in the Task Force’s analysis to date, and understand the role of partnerships. The Task Force also discussed plans to solicit feedback from a range of external stakeholders, including the general public, on the interim report and on the development of a plan to implement the NAIRR vision. The Task Force agreed that this engagement will include a request for information and a public listening session, as well as interagency and international roundtables.

A panel of outside experts provided insights to the Task Force on practical tools and processes that the NAIRR could incorporate to meet requirements for promoting responsible AI research and protecting privacy, civil rights, and civil liberties. The panelists shared ideas on embedding ethical checks throughout the NAIRR governance process, requiring algorithmic impact statements, practicing dataset stewardship, setting up ethics and society review boards, and other exemplar practices.

The Task Force members answered questions posed by the public attendees throughout the meeting, addressing public-private partnerships, data access, intellectual property, and other issues in the context of the envisioned NAIRR design.

The Task Force will hold its seventh public meeting in May 2022. Details on how to participate in future Task Force meetings will be available on AI.gov/nairrtf, along with materials from this and prior meetings.

External Speakers at the April 8 Meeting:

  • Beena Ammanath, Global Deloitte AI Institute
  • Michael Bernstein, Stanford University
  • Arvind Narayanan, Princeton University
  • Beth Plale, Indiana University Bloomington
  • Christo Wilson, Northeastern University

###
