
Assurance in artificial intelligence becomes focus of Commonwealth Cyber Initiative workshop

Photo Credit: Wrangler/Adobe Stock

Artificial intelligence is spreading through our homes and infrastructure, powering systems that can adapt to external input, whether it’s a car swerving into the next lane, a punishing rainstorm, or a heavy load of laundry.

Software driven by artificial intelligence, or AI, has become sophisticated enough to match human performance — or even exceed it — in certain areas. But because AI is fundamentally different from, and more elusive than, the generations of software that preceded it, we don’t always know how, or whether, it’s working.

Figuring out when we can trust AI, and when we probably shouldn’t, is a field called AI assurance. It’s becoming increasingly critical as AI takes on greater roles in areas like transportation and national security, where trustworthiness has a direct impact on safety.

For that reason, research and development in AI assurance is a central priority for the Commonwealth Cyber Initiative (CCI), a statewide network that accelerates research, innovation, commercialization, and talent development in advanced cybersecurity technologies.

The CCI’s AI assurance projects will explore how we know whether AI is reliable, whether its decisions are explainable, whether it adequately safeguards privacy, and whether it’s equitable, among other questions.

“We need to understand how the algorithms work and when they break,” explained Laura Freeman, a research associate professor of statistics in the College of Science and associate director of the Intelligent Systems Lab at the Hume Center for National Security and Technology. “We need to understand that before we start fielding systems that can impact human safety.”

At a workshop focusing on “Trustworthy AI,” more than two dozen experts from seven universities involved in the CCI came together at the Virginia Tech Research Center – Arlington to consider how the organizations’ wealth of resources in computing, cybersecurity, and data science might be effectively leveraged for AI assurance.

“How do we as an organization of all these universities actually organize to tackle these critical cross-disciplinary research questions?” said Freeman of the workshop’s goal. “We have outstanding research programs across the departments and universities that make up CCI, we have space in Arlington – how do we put it together to tackle AI assurance?”

The CCI network has regional nodes in southwestern, central, coastal, and northern Virginia, and a hub being incubated at the Virginia Tech Research Center – Arlington. Virginia Tech, Virginia Commonwealth University, Old Dominion University, and George Mason University each anchor one of the regional nodes; collectively, the CCI engages students and faculty from 39 institutions of higher education, combining technical expertise in a range of emerging issues — including AI assurance.

AI assurance evolved from the field of software assurance — strategies for ensuring that systems are trustworthy and free of vulnerabilities when lines of code, rather than mechanical components, make the difference between success and failure.

But the tools developed to assess the integrity of traditional software, which is designed to give predictable output and has fixed code that can be checked line by line, are not sufficient for AI.

In AI-powered systems, new input often gives unpredictable output — randomness doesn’t necessarily signal a problem, as it would for traditional software. And the algorithms guiding AI are buried deep within the system, built layer by layer through cycles of machine learning. There’s no blueprint for what they’re supposed to look like, no master copy to check them against. The result is that when algorithms make mistakes, we can’t explain them or find ways to correct them.

This fundamental uncertainty is often called the “black box” of AI. It makes it difficult to assess when software is working properly and creates opportunities for manipulation — if we don’t know when the system is working, we can’t tell when it’s been compromised.

Testing AI algorithms takes expertise in data science and model design, powerful computing resources, and a strong understanding of the nuances of human-computer interactions and the real-world applications for this software. All of that expertise is available in the CCI’s robust network of talent, and the organization’s structure creates a framework for fluid collaboration.

The Arlington workshop provided a forum to spark some of those collaborations, as researchers had the opportunity to talk with colleagues at other universities tackling similar or complementary problems. The group dove into discussions on topics ranging from AI policy to assessing whether algorithms are fair and unbiased, and explored strategies for creating a self-sustaining loop of foundational and applied research and cultivating the next generation of technology and talent in AI assurance.

“I was inspired by the group,” said Milos Manic, a professor of computer science at Virginia Commonwealth University. “I hope that this initiative will provide a platform for academic, government, and industry groups to pool their collective passion and expertise to find solutions for AI assurance that will benefit everyone.”

