It is comforting to think that criminals, even cybercriminals, are dumb. After all, they weren’t smart enough to make it in the “real world” — they didn’t launch their own lawful businesses or apply any skills or talents to legitimate trades. Instead, they prey on those who earn honest livings by pilfering whatever money and valuables they can.

Unfortunately, it is becoming increasingly clear that many criminals, especially cybercriminals, aren’t dumb; in fact, they may well be cleverer and more skilled than the average person. What’s more, they can now craft even smarter tools to help them by using artificial intelligence.

Cybersecurity experts have long predicted the application of AI to nefarious deeds, and slowly but surely, those predictions are coming true. Recently, a number of AI-backed malware campaigns and hacking attempts have been unleashed, much to the dismay of businesses, consumers and cybersecurity pros.

Identity Theft and Financial Fraud 

In March, the CEO of a U.K.-based energy firm received a call from his superior, the chief executive of the firm’s German parent company. On the call, the chief executive ordered an urgent transfer of a large sum of money to a Hungarian supplier; in subsequent calls, he assured the CEO that the transfer would be reimbursed shortly, while requesting additional transfers. Eventually the CEO grew suspicious; although he recognized his boss’s voice, he refused to make any further transfers and contacted authorities.

As it turns out, the voice on the line wasn’t the German chief executive; it wasn’t a person at all. Rather, the scam took advantage of deepfake technology, which relies on AI to produce realistic imitations of people’s voices, faces and more. In this case, fraudsters appear to have used deepfake AI to impersonate the executive, and they effectively swindled $243,000 out of the U.K. into Hungary and then into Mexico, where authorities lost the trail.

Deepfakes are deeply unsettling to most security experts, and this recent application of the tech seems to confirm their fears of greater opportunity for identity theft and other major crimes. Unfortunately, there aren’t yet many viable strategies for preventing deepfakes beyond limiting the availability of one’s face and voice and questioning every message, video and audio clip that arrives. It can also help to install a powerful security tool from trendmicro.com on business and personal devices to prevent leaks of personally identifying data. Until security experts or legislators do something about deepfakes, this AI hacking method will likely flourish.
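As a rough illustration of what “questioning every message” can look like in practice, the sketch below holds any payment instruction that arrives over an easily spoofed channel until it has been confirmed out of band. The threshold, channel names and helper functions are hypothetical, not a real product or policy.

```python
# Minimal sketch of an out-of-band verification rule for payment requests.
# All names, thresholds and channels here are hypothetical illustrations.

from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # hypothetical: amounts above this always require a callback

@dataclass
class PaymentRequest:
    requester: str   # who appears to be asking, e.g. "chief executive, parent company"
    channel: str     # "voice_call", "email", "video", ...
    amount: float
    beneficiary: str

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Flag any request that arrives over a spoofable channel or exceeds the threshold."""
    spoofable_channels = {"voice_call", "email", "video"}
    return req.channel in spoofable_channels or req.amount >= APPROVAL_THRESHOLD

def handle_request(req: PaymentRequest) -> str:
    if requires_out_of_band_check(req):
        # Confirm by calling the requester back on a number from the company
        # directory, never the number the original call came from.
        return "HOLD: verify via a known, independent channel before transferring"
    return "OK: proceed under normal payment controls"

print(handle_request(PaymentRequest("parent-company executive", "voice_call", 243_000, "HU-supplier")))
```

The point of the design is simple: the confirmation has to travel over a channel the fraudster doesn’t control, which is exactly what the U.K. CEO lacked when the familiar voice kept calling back.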

IoT Hacking With ATMs

Many people think of ATMs as relatively simple machines, but modern ATMs are better understood as complex IoT devices: connected to the internet and stocked with cold, hard cash. ATM hacking stories aren’t new; they go back essentially to the first ATMs. Though most ATMs use a range of security measures to keep money safe, including firewalls, encryption and physical defenses, they remain prime targets for attack.

Recently, a new method of attacking ATMs has emerged, called jackpotting. While the exact approach differs from attacker to attacker, jackpotting generally entails installing malware on an ATM to gain control over the mechanical component that dispenses cash. A jackpotter might log into a legitimate bank account and request $20, yet walk away with thousands of dollars dispensed at once. Experts estimate that combined losses from jackpotting in the U.S. exceed $1 million, and as jackpotters begin applying AI to their attacks, that number will only climb.
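The core of the fraud is a mismatch: the cash the machine physically dispenses no longer matches what the bank’s host authorized. The minimal sketch below illustrates that discrepancy as a simple reconciliation check; the function name, amounts and tolerance are hypothetical, and real ATM monitoring is far more involved.

```python
# Conceptual sketch of the discrepancy jackpotting creates between the
# authorized withdrawal and the cash actually dispensed. Hypothetical
# field names and values, for illustration only.

def reconcile(authorized_cents: int, dispensed_cents: int, tolerance_cents: int = 0) -> bool:
    """Return True when the cash actually dispensed matches the authorized withdrawal."""
    return abs(dispensed_cents - authorized_cents) <= tolerance_cents

# A normal $20 withdrawal reconciles cleanly...
print(reconcile(authorized_cents=2_000, dispensed_cents=2_000))    # True

# ...while a jackpotted dispense of $4,000 against a $20 authorization does not,
# which is exactly the kind of discrepancy monitoring should surface.
print(reconcile(authorized_cents=2_000, dispensed_cents=400_000))  # False
```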

Money Laundering Through Bank Software

Another distressing application of AI isn’t necessarily to attack or steal from businesses or consumers but to help criminals get away with other crimes, like illegal drug sales or sex trafficking. Money laundering is a crucial step in making ill-gotten gains appear legitimate. In the past, criminals would channel their profits through certain investments; with AI, they can deposit dirty money directly into financial institutions and shuffle it around until it looks legitimate. In one case, an AI transferred various sums among more than 250 accounts, using labels like “present for Dad” or “new car” to make the transfers look realistic. This made the money all but impossible to track: by the time the criminals finally withdrew it, authorities struggled to determine where it had come from in the first place.

Fortunately, in this case and many others, AI can be used by the good guys, too. Several large financial institutions already apply AI to identify accounts engaged in money laundering, along with other illegal financial activity. Even as AI used for nefarious purposes advances, the AI tools used to keep people safe advance alongside it.
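To give a flavor of what such detection can look for, here is a minimal sketch of one “layering” signal: a single account rapidly fanning small transfers out to an unusually large number of counterparties, much like the 250-account scheme described above. The thresholds and field names are hypothetical; production anti-money-laundering systems combine many such features with trained models.

```python
# Crude stand-in for one feature an AML model might use: a sender that
# sprays small amounts to many counterparties in a short window.
# Thresholds and names are hypothetical, for illustration only.

from collections import defaultdict

def flag_layering(transfers, max_counterparties=50, max_avg_amount=1_000):
    """transfers: iterable of (sender, receiver, amount) tuples from one time window."""
    outgoing = defaultdict(list)
    for sender, receiver, amount in transfers:
        outgoing[sender].append((receiver, amount))

    flagged = []
    for sender, outs in outgoing.items():
        counterparties = {receiver for receiver, _ in outs}
        avg_amount = sum(amount for _, amount in outs) / len(outs)
        if len(counterparties) > max_counterparties and avg_amount < max_avg_amount:
            flagged.append(sender)
    return flagged

# Example: one account spraying gift-sized transfers to 250 other accounts.
window = [("acct_0", f"acct_{i}", 400) for i in range(1, 251)]
print(flag_layering(window))  # ['acct_0']
```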