
Researchers, Sen. Warner encourage inclusion of AI in digital copyright legislation

Rebecca Barnabi

Every three years, the Digital Millennium Copyright Act (DMCA) goes through a rulemaking process to authorize exemptions that allow individuals and researchers to circumvent technical protection measures on copyrighted material without risking liability.

This year, artificial intelligence (AI) researchers have petitioned for a new exemption relating to “Security Research Pertaining to Generative AI Bias.” U.S. Sen. Mark R. Warner of Virginia has led the charge in the Senate to explore the capabilities of AI technology while advocating for reasonable guardrails around its usage. He argues that expanding the current good-faith research exemption to cover research that falls outside of traditional security concerns, such as bias and other harmful outputs, is the best way to ensure safe and equitable AI while enabling its continued innovation, public trust and adoption.

Warner, Chairman of the Senate Select Committee on Intelligence and co-chair of the Senate Cybersecurity Caucus, wrote to the U.S. Copyright Office in support of expanding the existing good-faith research exemption within the DMCA.

“Due to the difficulty in understanding the full range of behaviors in AI systems – particularly as models are introduced in contexts that diverge from their intended use – the scope of good-faith research has expanded to the identification of safety flaws caused by misaligned AI systems, as well as research into how AI systems can reflect and reproduce socially and economically harmful biases…it is crucial that we allow researchers to test systems in ways that demonstrate how malfunctions, misuse, and misoperation may lead to an increased risk of physical or psychological harm,” Warner wrote.

He added that a Department of Justice letter emphasized that a hallmark of the research exemption has been the good faith of security researchers.

“In the absence of regulation, many AI firms have voluntarily adopted measures to address abuse, security, and deception risks posed by their products. Given the growing use of generative AI systems for fraud, non-consensual intimate image generation, and other harmful and deceptive activity, measures such as watermarks and content credentials represent especially important consumer protection safeguards. While independent research can meaningfully improve the robustness of these kinds of authenticity and provenance measures, it is vital that the Copyright Office ensure that expansion of the exemption does not immunize research that intends to undermine these vital measures; absent very clear indicia of good faith, efforts that undermine provenance technology should not be entitled to the expanded exemption.”

This is the latest step in Warner’s efforts to rein in big tech and better understand the impacts of rapidly expanding usage of AI. Earlier this month, he introduced the Secure Artificial Intelligence Act of 2024, legislation to improve the tracking and processing of security and safety incidents and risks associated with AI.

Rebecca J. Barnabi is the national editor of Augusta Free Press. A graduate of the University of Mary Washington, she began her journalism career at The Fredericksburg Free-Lance Star. In 2013, she was awarded first place for feature writing in the Maryland, Delaware, District of Columbia Awards Program, and was honored by the Virginia School Boards Association’s 2019 Media Honor Roll Program for her coverage of Waynesboro Schools. Her background in newspapers includes writing about features, local government, education and the arts.