Humans, not regulations, will be most important part of AI technology in U.S.

Rebecca Barnabi
Artificial intelligence (© Zobacz więcej – stock.adobe.com)

The U.S. Senate Select Committee on Intelligence held an open hearing Tuesday afternoon about artificial intelligence (AI).

The hearing was titled “Advancing Intelligence in the Era of Artificial Intelligence: Addressing the National Security Implications of AI.”

“In many ways, the opportunities and technology of this is not new for this committee,” said Committee Chair Sen. Mark R. Warner of Virginia.

What has changed for the committee, however, are the social, political and national security implications of AI.

Rapid advancements in AI technology have the potential to drive innovation, but also to raise ethical challenges.

“We as a body are seeking to rise to that risk,” Warner said Tuesday afternoon.

Foreign governments have the capability to harness AI tools used by U.S. educational institutions for research, and to access American information.

The question is: how can the Intelligence Community adapt to the technology? Warner said he is concerned about AI replicating the human voice and AI providing inaccurate health information.

How AI expands and exacerbates existing threats is a danger to U.S. national security.

“AI capabilities, I think we all know, hold enormous potential,” Warner said.

Committee Vice Chair Sen. Marco Rubio of Florida said he is concerned with AI displacing humans from employment “by times infinity.”

“And that has national security implications,” Rubio said. He added that AI could eventually reach professions that previously were insulated against technology because of educational requirements.

Rubio said Hollywood writers and actors have been on strike all summer, driven in part by fear of being replaced by AI writers and actors.

Warner asked panel participants how the U.S. can democratize access to AI tools while also establishing regulatory guardrails for AI.

“AI is going to become an open platform,” explained Dr. Yann LeCun, Vice President and Chief AI Scientist, Meta Platforms & Silver Professor of Computer Science and Data Science at New York University.

The Internet, he said, began as a private advertising platform in the late 1990s and evolved into an open platform accessible to everyone; AI, he predicted, will follow the same progression.

Warner pointed out the massive antitrust issues involving AI, American elections and open markets.

“It’s a tough debate and a tough discussion,” said Dr. Jeffrey Ding, Assistant Professor of Political Science at George Washington University. He is less convinced that AI becoming an open platform will lessen concerns about the technology.

However, Ding said guardrails are possible.

“You’re worried about the guardrails. I’m not sure you have the [personnel] to get it out of the station,” said Dr. Benjamin Jensen, Senior Fellow at the Center for Strategic & International Studies and a professor at Marine Corps University School of Advanced Warfighting.

For Jensen, the humans involved with AI are more important than having guardrails against misuse.

“You have to have the people who know what they are doing,” Jensen said.

Education of the American workforce must outpace that of America’s adversaries when it comes to the use of AI technology.

Rubio said he is also worried about authoritarian regimes in which a single individual decides to go to war believing they can win it. Some countries will use AI for war analysis, he said, and that analysis will be flawed.

“You could very well lead 21st Century conflicts [with AI],” Rubio said.

Jensen said the world will always have to worry about leaders using flawed data to make war.

Ding suggested always having humans involved with AI “and, hopefully, that will make these systems more robust.”

Sen. John Cornyn of Texas said the field of AI began in 1956 but was barely spoken of for decades, “and today we can’t talk about anything else.”

LeCun explained that progress toward human-level intelligence in AI long remained elusive. In the last 10 years, deep learning has been developed to train programs “and has been incredibly successful for narrow tasks.”

The slow development of human-level intelligence in AI is why autonomous vehicles and robots have not yet arrived.


Rebecca Barnabi

Rebecca J. Barnabi is the national editor of Augusta Free Press. A graduate of the University of Mary Washington, she began her journalism career at The Fredericksburg Free-Lance Star. In 2013, she was awarded first place for feature writing in the Maryland, Delaware, District of Columbia Awards Program, and was honored by the Virginia School Boards Association’s 2019 Media Honor Roll Program for her coverage of Waynesboro Schools. Her background in newspapers includes writing about features, local government, education and the arts.