
‘Bad bots’: AI’s villains beginning to enter the picture to perpetrate fraud online

Rebecca Barnabi
Artificial intelligence (© Kaikoro – stock.adobe.com)

Artificial intelligence is on the rise, bringing new technology into people’s daily lives.

But it has also given criminals another path to fine-tune online fraud.

Hackers have created “bad bots,” including WormGPT, WolfGPT and FraudGPT, to roll out phishing attacks and malware against unsuspecting individuals.

NordVPN outlines AI’s threats to consumers and offers advice on protecting yourself from the new era of scams.

“There has been a sharp rise in phishing emails over the last year, and AI tools have fuelled many of them. However, hackers are not just hoping to triumph through the sheer weight of numbers. With an AI’s power to analyze huge volumes of data, they can create ‘spear-phishing’ attacks highly tailored to individual targets, improving their chances of success,” NordVPN Chief Technology Officer Marijus Briedis said.

Voice cloning and deepfakes are among the ways criminals use AI to defraud individuals. Bad chatbots lack the safeguards of standard AI chatbots, making them tailor-made for cybercriminals looking to improve and extend the range of their scams. Bad bots can analyze vast amounts of data to create more personalized and convincing phishing attacks that steal information such as bank details or persuade individuals to click on dangerous links. They can also identify vulnerable websites for hackers to target and generate code for new forms of malware with the potential to evade traditional detection.

“Worryingly, the red flags traditionally associated with these scams, like spelling mistakes and bad grammar, are quickly being wiped away and growing machine involvement means they can be rolled out on an industrial scale,” Briedis said.

However, WormGPT and FraudGPT have not yet mastered the large language model (LLM) technology that drives the most powerful AI chatbots, so their fraudulent business emails are not as convincing.

AI’s bad guys are even targeting dating sites, starting cons on a genuine app, encouraging victims to continue the conversation in another messaging service, then guiding them to a fraudulent trading app with the promise of teaching them how cryptocurrency works.

“CryptoRoms are on the rise, so if someone approaches you on a dating app and tries to turn the conversation to crypto or directs you to download a trading app, make your excuses and leave. Sadly, the fake trading apps are finding their way through strict filters on the Google Play and Apple app stores,” Briedis said.

AI is also enabling highly personal voice-cloning cons, in which a familiar voice in a video or voice message is used in a targeted attack to fool friends or family members.

“Voice cloning scams are a sick imitation game capable of cheating people out of thousands of dollars. They rely heavily on the emotional pull of hearing a familiar voice and the pressure being applied on the victim to take urgent action,” Briedis said.

He encourages Americans who think they may be the victim of a voice cloning con to ask a personal question to take the voice “off script” and expose the clone.

AI “deepfakes” are the tool of the more sophisticated cybercriminal, who uses computer-generated copycats designed to mimic the look and sound of well-known individuals or celebrities. The con is an easy way to convince someone to hand over cash or valuable personal information. World leaders have been impersonated by deepfakes, including Presidents Biden, Trump and Obama.

“Deepfakes are becoming ever more sophisticated, but there are still some errors that can give the game away,” Briedis said.

“Look out for sudden head movements that can reveal blurry edges to the figure on film, or unusual changes to the lighting. The way an AI deepfake is grafted to the image of the real person can be startlingly smooth, making the final result seem shiny, or as if a social media filter had been applied to it. Mouth movements may also appear unnatural to the naked eye.”

Briedis offers more advice on how to avoid becoming a target for AI fraud:

  • Be cautious about what you share online, both publicly and privately. Avoid posting sensitive personal information, such as your home address, phone number or financial details, on social media or other public forums. Also, consider whether it’s necessary to share information and how it might be used by others.
  • Keep all your devices and software up to date with the latest security patches and updates, as this helps protect against known vulnerabilities that hackers and AI-based attacks might exploit.
  • Never just click and accept privacy policies on websites without knowing what you are consenting to. As AI systems become more frequently used, there is an increased risk of sensitive data being mishandled or used inappropriately. Review the websites and services you use and opt out of data collection and sharing whenever possible.

Rebecca Barnabi

Rebecca J. Barnabi is the national editor of Augusta Free Press. A graduate of the University of Mary Washington, she began her journalism career at The Fredericksburg Free-Lance Star. In 2013, she was awarded first place for feature writing in the Maryland, Delaware, District of Columbia Awards Program, and was honored by the Virginia School Boards Association’s 2019 Media Honor Roll Program for her coverage of Waynesboro Schools. Her background in newspapers includes writing about features, local government, education and the arts.