
Legislation would require U.S. federal agencies, vendors to follow AI risk management guide

Rebecca Barnabi

The Federal Artificial Intelligence Risk Management Act is legislation to require U.S. federal agencies and vendors to follow the AI risk management guidelines put forth by the National Institute of Standards and Technology (NIST).

Congressmembers Ted W. Lieu of California, Zach Nunn of Iowa, Don Beyer of Virginia and Marcus Molinaro of New York introduced the bill today. Sens. Jerry Moran of Kansas and Mark R. Warner of Virginia introduced companion legislation in the Senate at the end of 2023. 

Congress directed NIST to develop an AI Risk Management Framework that organizations, public and private, could employ to ensure they use AI systems in a trustworthy manner. The framework was released in early 2023 and is supported by a wide range of public and private sector organizations, but federal agencies are not required to use it to manage AI systems.

The Federal Artificial Intelligence Risk Management Act would require federal agencies and vendors to incorporate the NIST framework into their AI management efforts to help limit the risks associated with AI technology.

“As AI continues to develop rapidly, we need a coordinated government response to ensure the technology is used responsibly and that individuals are protected,” Lieu said. “The AI Risk Management Framework developed by NIST is a great starting point for agencies and vendors to analyze the risks associated with AI and to mitigate those risks. These guidelines have already been used by a number of public and private sector organizations, and there is no reason why they shouldn’t be applied to the federal government as well. I’m grateful to my House and Senate colleagues from both sides of the aisle for their partnership in this effort to promote safe AI use within the federal government and to allow the United States to continue to lead on AI.”

Beyer said that safeguards against AI’s risks will become “increasingly important” as the federal government expands use of AI.

“Our bill, which would require the federal government to put into practice the excellent risk mitigation and AI safety frameworks developed by NIST, is a natural starting point. By ensuring that federal agencies have the necessary tools to navigate the complexities of AI, we can ensure both the trustworthiness and effectiveness of AI systems used by the government and encourage other organizations and companies to adopt similar standards. This bill lays the foundation for harnessing the power of AI for the benefit of the American people, while upholding the highest standards of accountability and transparency,” Beyer said.

Nunn said technological advancement is good for society and can make government more effective.

“As the federal government implements AI toward this end, we must ensure that Americans’ data is safe and the government is transparent about what it is doing. This bipartisan bill will ensure we’re doing everything we can to protect the American people while leveraging the full capabilities of new technology,” Nunn said.

According to Molinaro, lawmakers “have to recognize it is here and being widely utilized. Congress must provide guidance to operate AI safely and close cybersecurity gaps. Congress took an important step forward by directing NIST to develop an AI Risk Management Framework. Our bipartisan bill will require federal agencies to adopt this framework to help unleash the potential of AI, while keeping federal assets secure.”

Moran said AI has the potential to improve the efficiency and effectiveness of the federal government and to have positive impacts on the private sector.

“However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions,” Moran said.

Warner is chair of the Senate Select Committee on Intelligence and a former technology entrepreneur.

“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries. But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks,” Warner said.

Fred Humphries, corporate vice president of U.S. Government Affairs at Microsoft, said the company looks “forward to working with Reps. Lieu, Nunn, Beyer, and Molinaro as they advance this framework.”

Rebecca J. Barnabi is the national editor of Augusta Free Press. A graduate of the University of Mary Washington, she began her journalism career at The Fredericksburg Free-Lance Star. In 2013, she was awarded first place for feature writing in the Maryland, Delaware, District of Columbia Awards Program, and was honored by the Virginia School Boards Association’s 2019 Media Honor Roll Program for her coverage of Waynesboro Schools. Her background in newspapers includes writing about features, local government, education and the arts.