Beyer, Eshoo introduce legislation to establish AI foundation model standards

Chris Graham
(© Kaikoro – stock.adobe.com)

AI is either something you think could make the world more efficient, or something that scares the hell out of you because you remember the “Terminator” movies.

Congress is trying to get its head around AI like the rest of us are.

“AI offers incredible possibilities for our country, but it also presents peril. Transparency into how AI models are trained and what data is used to train them is critical for consumers and policymakers,” said Anna Eshoo, a California Democrat who is joining Virginia Democrat Don Beyer in the introduction of the AI Foundation Model Transparency Act, a bill aimed at promoting transparency in artificial intelligence foundation models.

The AI Foundation Model Transparency Act, which Eshoo and Beyer, who are leaders in the Congressional AI Caucus, introduced last week, directs the Federal Trade Commission and National Institute of Standards and Technology to establish standards for data sharing by foundation model deployers.

Foundation models are AI models trained on broad data. They power the generative AI websites and chatbots that have drawn international focus over the past year. Information about the data these models are trained on is generally not available to the public.

The problem with AI models is that they can, and often do, produce inaccurate, imprecise and even biased responses due to limitations or biases in the model’s training data or in how the model was trained. This can result in race or gender bias, which can have serious real-world impacts in areas including health-related AI inferences, loan granting, housing approval, and predictive policing.

The AI Foundation Model Transparency Act would direct the Federal Trade Commission, in consultation with the National Institute of Standards and Technology and the Office of Science and Technology Policy, to set standards for what information high-impact foundation models must provide to the FTC and what information they must make available to the public. Information identified for increased transparency would include training data used, how the model is trained, and whether user data is collected in inference.

“Artificial intelligence foundation models, commonly described as a ‘black box,’ make it hard to explain why a model gives a particular response. Giving users more information about the model — how it was built and what background information it bases its results on — would greatly increase transparency,” Beyer said. “This bill would help users determine if they should trust the model they are using for certain applications, and help identify limitations on data, potential biases, or misleading results. When a model’s bias could lead to harmful results like rejections for housing or loan applications, or faulty medical decisions, the importance of this reform becomes clear and very significant.”

Text of the AI Foundation Model Transparency Act is available here, with a one-pager on the bill here.

Chris Graham

Chris Graham, the king of "fringe media," is the founder and editor of Augusta Free Press. A 1994 alum of the University of Virginia, Chris is the author and co-author of seven books, including Poverty of Imagination, a memoir published in 2019; Team of Destiny: Inside Virginia Basketball’s Run to the 2019 National Championship, published in 2019; and The Worst Wrestling Pay-Per-View Ever, published in 2018. For his commentaries on news, sports and politics, go to his YouTube page, or subscribe to his Street Knowledge podcast. Email Chris at [email protected].