Bipartisan legislation authored by U.S. Senators Mike Braun and Gary Peters to create an artificial intelligence (AI) training program for federal supervisors and management officials has advanced in the Senate. The training program would help improve the federal workforce’s understanding of AI applications and ensure that leaders who oversee the use of these tools understand AI’s potential benefits and risks.
The bill was advanced by the Senate Homeland Security and Governmental Affairs Committee, where Peters serves as Chair, and now moves to the full Senate for consideration.
“In the past couple of years, we have seen unprecedented development and adoption of AI across industries. We must ensure that government leaders are trained to keep up with advancements in AI and to recognize the benefits and risks of this tool.” — Sen. Braun
“Artificial intelligence has the potential to make the federal government more efficient, but only if government leadership is properly trained to ensure this technology benefits the American people. My bipartisan legislation will ensure supervisors and management officials have the resources to make informed decisions regarding AI technology and its use in the federal government.” — Sen. Peters
Use of artificial intelligence is widespread across government agencies. The AI Leadership Training Act would provide guidance to federal leaders making decisions about AI technology and ensure that its risks and rewards are properly weighed to best benefit agency missions and American communities. Organizations such as the National Security Commission on Artificial Intelligence (NSCAI) and the National AI Advisory Committee (NAIAC) have recommended additional AI training for the federal workforce to ensure the appropriate use of these tools.
This bipartisan legislation would require the Director of the Office of Personnel Management (OPM) to provide and regularly update an AI training program for federal government supervisors and management officials. The training aims to help federal leaders understand the capabilities, risks, and ethical implications associated with AI, so they can better determine whether an AI capability is appropriate to meet their mission requirements.