The National Institute of Standards and Technology (NIST) was directed by Congress to develop an Artificial Intelligence (AI) Risk Management Framework.
The framework is intended for public and private organizations to use to ensure they deploy AI systems in a trustworthy manner. Released in early 2023, it is supported by a wide range of public and private sector organizations, but federal agencies are not currently required to use it to manage their use of AI systems.
The Federal Artificial Intelligence Risk Management Act would require federal agencies to incorporate the NIST framework into their AI management efforts to help limit the risks that could be associated with AI technology.
U.S. Sens. Jerry Moran of Kansas and Mark R. Warner of Virginia introduced legislation yesterday to establish guidelines to be used within the federal government to mitigate risks associated with AI while still benefiting from new technology.
“AI has tremendous potential to improve the efficiency and effectiveness of the federal government, in addition to the potential positive impacts on the private sector,” Moran said. “However, it would be naïve to ignore the risks that accompany this emerging technology, including risks related to data privacy and challenges verifying AI-generated data. The sensible guidelines established by NIST are already being utilized in the private sector and should be applied to federal agencies to make certain we are protecting the American people as we apply this technology to government functions.”
Warner, chair of the Senate Select Committee on Intelligence, formerly worked in the technology industry.
“The rapid development of AI has shown that it is an incredible tool that can boost innovation across industries,” Warner said. “But we have also seen the importance of establishing strong governance, including ensuring that any AI deployed is fit for purpose, subject to extensive testing and evaluation, and monitored across its lifecycle to ensure that it is operating properly. It’s crucial that the federal government follow the reasonable guidelines already outlined by NIST when dealing with AI in order to capitalize on the benefits while mitigating risks.”
U.S. Rep. Ted W. Lieu of California plans to introduce companion legislation in the U.S. House.
“Procurement of AI systems is challenging because AI evaluation is a complex topic and expertise is often lacking in government,” said Dr. Arvind Narayanan, Professor of Computer Science, Princeton University. “It is also high-stakes because AI is used for making consequential decisions. The Federal Artificial Intelligence Risk Management Act tackles this important problem with a timely and comprehensive approach to revamping procurement by shoring up expertise, evaluation capabilities, and risk management.”
Enterprise Cloud Coalition Executive Director Andrew Howell said the coalition supports the legislation and adoption of the framework.
“By standardizing risk management practices, this act ensures a higher degree of reliability and security in AI technologies used within our government, aligning with our coalition’s commitment to trust in technology. We believe this legislation is a critical step toward advancing the United States’ leadership in the responsible use and development of artificial intelligence on the global stage,” Howell said.