A world of automation is coming.
Behind automation, including in technology and the auto industry, is artificial intelligence or AI.
Dr. Cayce Myers is a professor of public relations and director of graduate studies in Virginia Tech’s School of Communication. His courses include media history, political communication, public relations, media law, laws affecting public relations practice, and discussions of AI.
“Artificial intelligence is a huge part of a new communications revolution,” Myers said. He considers understanding AI essential for his students, along with an awareness of how AI law and policy are evolving in the United States.
According to Myers, many believe AI will take jobs from humans, but what is more likely is that it will take over parts of jobs, such as online content creation.
“People don’t know how it’s going to evolve,” Myers said.
What we will see, he said, is regulation of AI in certain areas. AI-created content cannot receive copyright protection, and other concerns include privacy and discrimination.
Movies have focused on the weaknesses of AI, imagining robots taking over humanity and other scenarios rooted in fear of the unknown.
“I think there’s a lot of benefit,” Myers said. AI can help humans become better writers, create technology jobs and streamline many jobs.
“It’s like with any tool it has positives but it also has negatives and it’s got disruptors.”
He said that humans need to move past their fear of AI. Fear of the unknown is typical with any new technology, but we can also consider where the technology might take us.
Myers said that AI will need guardrails, as well as government regulation. However, the technology is developing at a pace that can be unsettling.
In the media, AI is portrayed as entertainment and as a force telling humans what to do. Myers predicts that within the next five years, the federal government and some state governments will have AI regulations in place. “A real attempt to regulate, but to regulate in a way that doesn’t stifle,” Myers said.
Another concern with AI is spotting what he calls deep fakes, that is, separating fact from fiction.
“It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deep fakes,” Myers said. “The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI.”
Much more disinformation, both visual and written, is possible, and users will need greater media literacy to spot it. Two sources create disinformation: humans and AI.
“Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation,” he said. “However, that is not going to be enough. Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent the widespread disinformation from being spread.”
But with AI technology developing so fast, finding a foolproof way to prevent the spread of disinformation is a challenge. And regulating AI will bring challenges of its own.
“The issue is that lawmakers do not want to create a new law regulating AI before we know where the technology is going. Creating a law too fast can stifle AI’s development and growth, creating one too slow may open the door for a lot of potential problems. Striking a balance will be a challenge,” Myers said.