AI-Wielding Hackers Are Here

Intelligent cybercrime tools exit proof of concept.

Maria Korolov

February 10, 2021


In the summer of 2019, I wrote about the coming threat of AI-wielding hackers.

I predicted that hackers would soon be using artificial intelligence to create new viruses faster than antivirus makers can keep up, write extremely realistic and compelling phishing emails, and automatically scan corporate networks for weaknesses.

The only thing stopping attackers from doing all that was that they were still making so much money from traditional attacks, like ransomware.

Well, about a month later, at the RSA Conference, Symantec CTO Hugh Thompson talked about three recent cases he'd seen where hackers used computer-generated "deep fake" voices to scam companies out of millions.

In one case, for example, someone pretending to be a company's CEO called an employee in the finance department and ordered an urgent wire transfer of $10 million.

"The CEO says on the phone, 'It's the end of our quarter, we need a wire transfer to go out within the next hour. It is critical. We have to pay this supplier and it has to happen before the quarter closes'," Thompon said.

In this case, the company had protocols and checks and balances in place, and the employee had to get someone else to sign off on the transfer.

"She convinced the colleague," Thompson said. "She said, 'I spoke to the CEO. What else do you need?'"

Senior executives often make public appearances, creating plenty of opportunities for hackers to get voice recordings and duplicate them. And if the potential payoff is several million dollars, what's to stop the hacker from calling the executive on some pretext and engaging them in conversation to get a recording?

Now, with more employees working remotely, there are fewer opportunities to walk down the hall to ask about an unusual request.

Deep-faked phone calls could also be used to get employees to change payment information on accounts, open phishing emails, reset passwords, turn off security measures, and perform a host of other actions useful to an attacker.

Criminals use Lyrebird to do this, said Maria Bada, senior research associate at the Cambridge Cybercrime Centre at Cambridge University.

"It's an application that allows anyone to save his or her voice so that a robot speaks typed sentences," she told DCK.

But deep fakes are just the sexiest, most visible part of the evil AI iceberg.

There's more. A lot more.

Nastier Ransomware

To make the biggest possible impact, ransomware has to spread as far as it can in an enterprise before it goes off.

Cybercriminals are already using AI to do that, said Amr Ahmed, managing director at Ernst & Young Consulting Services.

"They use the AI to see the responses of the firewalls and find open ports that have been overlooked by the security team," said Ahmed. "There are many situations where the firewall rules are in conflict in the same corporation. The best way to exploit that is with AI. Many of the penetrations that happened recently are using AI to exploit firewall rules."

Some attacks are obviously AI-powered because of their volume and sophistication, he said; in other cases, the AI has been built into exploit kits sold in underground marketplaces.

"The criminals have decided that this is a very lucrative business," said EY Consulting's senior manager Mounir Elmously. "They've created SDKs [software development kits] for ransomware that are already full of AI."

Cracking Passwords

The more data an AI system has, the smarter it gets. Well, there's a wealth of stolen passwords out there for the taking.

This spring, researchers from the Stevens Institute of Technology will be presenting a new and improved version of their PassGAN project. PassGAN was born in 2017, based on training data from leaked passwords – and was then tested against another set of leaked passwords, from LinkedIn. It was able to crack 27 percent of the LinkedIn passwords.

PassGAN has since been upgraded to use a form of reinforcement learning similar to the one AlphaZero used to learn chess – and can now adapt automatically during an ongoing attack, getting better and better at guessing the passwords.
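PassGAN's broad recipe is public. As a rough sketch of the idea – not the Stevens researchers' actual implementation – here is a minimal PyTorch version: a generator turns random noise into fixed-length character strings, while a discriminator learns to score how password-like they look. The alphabet, string length, and layer sizes below are arbitrary assumptions.

```python
# Hypothetical PassGAN-style sketch: generator proposes password guesses,
# discriminator (trained on leaked passwords) scores how realistic they are.
import string
import torch
import torch.nn as nn

CHARSET = string.ascii_lowercase + string.digits  # simplified alphabet (assumption)
MAX_LEN = 10                                      # fixed guess length (assumption)
NOISE_DIM = 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, MAX_LEN * len(CHARSET)),
        )

    def forward(self, z):
        logits = self.net(z).view(-1, MAX_LEN, len(CHARSET))
        return torch.softmax(logits, dim=-1)  # per-position char distribution

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(MAX_LEN * len(CHARSET), 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):
        return self.net(x.view(x.size(0), -1))  # "realness" score

def sample_guesses(gen: Generator, n: int) -> list[str]:
    """Decode generator output into candidate password strings."""
    with torch.no_grad():
        probs = gen(torch.randn(n, NOISE_DIM))
        idx = probs.argmax(dim=-1)
    return ["".join(CHARSET[i] for i in row) for row in idx.tolist()]

print(sample_guesses(Generator(), 3))  # untrained: random-looking strings
```

Training pits the two networks against each other until the generator's output starts to mimic the statistical quirks of real human-chosen passwords.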

This is yet another reason for data centers to move away from traditional passwords to multi-factor and password-less authentication systems.
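One common building block on the defensive side is time-based one-time passwords (TOTP). A minimal sketch using the open-source pyotp library – the account and issuer names are placeholders:

```python
# Minimal TOTP sketch with pyotp; account/issuer names are placeholders.
import pyotp

secret = pyotp.random_base32()  # per-user secret, provisioned once
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app during enrollment.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleDC"))

code = totp.now()         # what the user's authenticator currently displays
print(totp.verify(code))  # True within the validity window
```

Because the code rotates every 30 seconds, a password stolen in a breach – or guessed by a GAN – is no longer enough on its own.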

Automated Attacks

Hackers are also using AI and machine learning to automate attacks against enterprise networks, said Derek Manky, global security strategist at Fortinet.

"We saw proof of concept two years ago," he told DCK. "Now we've seen it put into day-to-day practice."

AI and ML let cybercriminals create malware that is smart enough to hunt for vulnerabilities and decide which payload to deploy to take advantage of them. That means the malware doesn't have to communicate back to its command-and-control servers, which helps it evade detection.

"And attacks can be laser focused rather than taking the usual slower, scattershot approach that can alert a victim that they are under attack," he added.

AI-Powered Phishing Emails

Employees have learned to spot phishing emails, especially those generated in bulk.

AI offers attackers the opportunity to custom-craft each email specifically for each recipient.

"That is where we're starting to see the first real weaponization of ML models," Manky said. That includes reading an employee's social media posts, or, in the case of attackers who have already compromised a network, reading all their emails.

Attackers can also use AI to insert themselves into ongoing email threads. An email that's part of an ongoing discussion automatically sounds authentic.

"They’re trying to gain entry to a system, go from one infected user to another, and email thread hijacking is a great way of doing that," he said.

AI Fuzzing

Another way that attackers are using AI is to find new software vulnerabilities.

"They're weaponizing AI to hack, to mine for zero-day threats," he said.

Legitimate software developers and pen testers already have fuzzing tools available to help them secure their own applications and systems – and anything that the good guys can get, the bad ones can, too.
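To see what such a tool looks like in practice, here is a minimal harness for Atheris, Google's open-source coverage-guided fuzzer for Python. The parse_record function is a toy stand-in, with a deliberate bug, for whatever code is being probed.

```python
# Minimal coverage-guided fuzz harness using Google's Atheris fuzzer.
# parse_record is a toy stand-in with a deliberate bug for the fuzzer to find.
import sys
import atheris

@atheris.instrument_func  # collect coverage so mutations steer toward the bug
def parse_record(data: bytes) -> None:
    if data.startswith(b"FUZZ") and len(data) > 8:
        raise ValueError("parser reached an unhandled state")

def TestOneInput(data: bytes) -> None:
    parse_record(data)  # any uncaught exception is reported as a finding

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```

The fuzzer throws mutated inputs at the target and uses coverage feedback to home in on crashing cases – exactly the kind of feedback loop that works just as well for finding zero-days as for fixing them.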

AI and related systems are becoming ubiquitous in the global economy, said Brad LaPorte, chief evangelist at Kasada and a former Gartner cybersecurity analyst, and the same thing is happening in the criminal underground.

"The source code, data sets, and approaches to developing and maintaining these powerful capabilities are open and freely available," he said. " If you are financially motivated to use this for malicious purposes, then this is where the investments and focus will be.

Data centers need to embrace a zero-trust approach for detecting malicious automation, he said.

About the Author

Maria Korolov

Maria Korolov is an award-winning technology journalist who covers cybersecurity, AI, and extended reality. She also writes science fiction.

https://www.mariakorolov.com/
