Cybercriminals no longer need advanced hacking skills: AI does the work for them. From deepfake scams to automated phishing attacks, artificial intelligence is making cybercrime faster, smarter, and harder to detect than ever. Instead of relying on outdated scams and manual hacking, attackers now use AI to craft highly convincing phishing emails, generate deepfake videos, and even automate entire intrusions.
The days of poorly written scam emails and obvious fraud attempts are over. AI allows attackers to mimic human behaviour with unsettling accuracy, whether it’s a scam email that sounds exactly like your boss, a phone call that perfectly replicates a loved one’s voice, or a fake video that looks completely real. These aren’t just isolated incidents—they’re part of a growing trend where cybercriminals use AI to exploit trust at an unprecedented scale.
In this blog, we’ll explore the most dangerous ways criminals are using AI against us and why staying aware is your best defence.
AI-Powered Phishing Attacks: Smarter Scams That Fool Even the Cautious
Phishing is no longer just about dodgy emails from foreign princes. AI is making these scams so sophisticated that even the most tech-savvy people are getting caught out.
AI can scrape data from your social media, emails, and online activity to create messages that sound just like they came from someone you trust. These aren’t the obvious “your account has been compromised” emails anymore. They reference real details about your life, making them incredibly hard to spot as scams.
Even worse, AI-generated deepfake voices are being used in phone scams. Imagine getting a call from what sounds like your boss or a family member, urgently asking you to transfer money or share a password. It’s happening now. In one widely reported case, a finance executive transferred millions because he truly believed he was speaking to his CEO; AI made sure the voice on the other end was indistinguishable from the real thing.
AI-Driven Hacking: Smarter, Faster, and More Ruthless
Hackers used to break into systems manually. Now, AI does it for them. AI-driven hacking tools can probe vast numbers of networks in minutes, finding vulnerabilities before security teams even realise they exist.
Take WormGPT, a cybercriminal’s dream tool: an underground AI model built for hacking. It crafts near-perfect scam emails, writes malicious code, and automates social engineering attacks. Unlike ChatGPT, which has ethical safeguards, WormGPT is built to assist cybercriminals, not stop them.
Then there’s AI-powered password cracking. Traditional cracking tools grind through millions of combinations by brute force. AI, on the other hand, predicts likely passwords from stolen data, social media activity, and human habits. Even complex-looking passwords aren’t safe if they follow predictable patterns.
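To see why “predictable” matters, here’s a minimal, purely illustrative Python sketch of the kind of pattern check a password-strength tool might run. The word list and substitution table are tiny placeholder assumptions (real guessing tools work from breach-derived lists of billions of passwords), and it flags habits rather than cracking anything.

```python
import re

# Placeholder assumption: real tools draw on huge lists of leaked passwords.
COMMON_WORDS = {"password", "welcome", "dragon", "summer", "liverpool"}

# Typical character swaps (a->@, o->0, e->3 ...) that guessing tools undo first.
LEET = str.maketrans("@450317$!", "aasoeitsi")

def predictable_patterns(password: str) -> list[str]:
    """Return the predictable habits found in a password (empty list = none spotted)."""
    findings = []

    # Strip any trailing number/punctuation run, then undo the character swaps.
    stripped = re.sub(r"[\d!?.]+$", "", password.lower())
    core = stripped.translate(LEET)

    # 1. A dictionary word at the core, e.g. "P@ssw0rd2024" -> "password".
    if core in COMMON_WORDS:
        findings.append(f"common word '{core}' hidden behind cosmetic substitutions")

    # 2. A year or counting sequence tacked on the end.
    if re.search(r"((19|20)\d{2}|123+)[!?.]*$", password):
        findings.append("predictable numeric suffix")

    # 3. Capitalising only the first letter - the most common 'complexity' habit.
    if password[:1].isupper() and password[1:] == password[1:].lower():
        findings.append("only the first letter capitalised")

    return findings

for pw in ("P@ssw0rd2024", "Tr0ub4dor&3", "plinth-otter-vivid-cassette"):
    hits = predictable_patterns(pw)
    print(f"{pw}: {'; '.join(hits) if hits else 'no obvious pattern'}")
```

A password that ticks every “complexity” box can still fall to every check above, which is exactly the weakness AI-assisted guessing exploits.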
ChatGPT and the Dark Side of AI: Bypassing Safeguards for Cybercrime
While ChatGPT and similar AI models have safeguards in place to prevent malicious use, cybercriminals have already found ways to bypass these protections. By using indirect prompts, rewording commands, or exploiting loopholes, attackers can trick AI into writing harmful code, generating phishing emails, or creating malware.
This means that even AI tools designed for good can be manipulated for criminal activity. Hackers can use ChatGPT to write convincing scam emails in multiple languages, craft malware scripts that evade detection, or even create guides for conducting cyber attacks. Developers continually update their safeguards, but cybercriminals are constantly testing new ways around them, making AI both a weapon for attackers and a defence for the rest of us.
Deepfake Scams and Identity Fraud: You Can’t Believe Your Eyes (or Ears)
Deepfake technology is turning cybercrime into a high-stakes game. AI can now generate ultra-realistic fake videos and voice recordings, and criminals are using them to pull off scams that sound like something out of a Hollywood thriller.
In Hong Kong, a finance worker attended what he thought was a normal video conference call with his colleagues. In reality, every other person in the meeting was a deepfake; their faces, voices, and mannerisms had been recreated using AI. Believing he was talking to real people, the finance worker transferred $25 million to scammers. By the time anyone realised the truth, the money was gone.
Now, imagine receiving a video from a loved one urgently asking for money because they’re in trouble. Except it’s not them—it’s an AI-generated fake. Criminals are scraping social media videos and using AI to manipulate footage, making their scams more believable than ever.
AI-Enhanced Social Engineering: Manipulation on a Whole New Level
Social engineering is all about deception: tricking people into clicking a malicious link, sharing information, or making a bad decision. AI is making it frighteningly effective.
AI can comb through your online activity to understand your habits, interests, and relationships. This allows scammers to craft messages that feel incredibly personal.
For example, if you often post about your dog, an AI-generated scam email could pretend to be from a vet clinic, warning you about a health issue common to your dog’s breed. Worried, you click the link—and just like that, malware is on your device.
Even in the workplace, AI can generate fake emails that look exactly like they came from your colleagues. These messages might ask you to share login details, approve fake invoices, or click on a seemingly harmless attachment. The scary part? Even trained employees are falling for these AI-generated scams.
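One habit that still defeats most of these lures is checking where a link actually points before acting on it. Below is a minimal Python sketch of that idea; the trusted-domain list and the handful of character swaps are illustrative assumptions (real mail filters use far larger homoglyph tables plus reputation data), but the principle holds: the displayed name means nothing, the real hostname means everything.

```python
from urllib.parse import urlparse

# Illustrative assumption: in practice this would be your organisation's
# allow-list, not a hard-coded set of example domains.
TRUSTED_DOMAINS = {"yourcompany.com", "yourvetclinic.co.uk"}

# A few character swaps common in lookalike domains (0->o, 1->l, rn->m).
SWAPS = str.maketrans("01", "ol")

def check_link(url: str) -> str:
    """Classify a URL by comparing its real hostname against trusted domains."""
    host = (urlparse(url).hostname or "").lower()

    if host in TRUSTED_DOMAINS or any(host.endswith("." + d) for d in TRUSTED_DOMAINS):
        return f"OK: {host} is on the trusted list"

    # Undo common substitutions to catch lookalikes such as 'yourcornpany.com'.
    normalised = host.translate(SWAPS).replace("rn", "m")
    for trusted in TRUSTED_DOMAINS:
        if normalised == trusted or normalised.endswith("." + trusted):
            return f"SUSPICIOUS: {host} imitates {trusted}"

    return f"UNKNOWN: {host} - verify through another channel before clicking"

for link in ("https://yourcompany.com/reports",
             "https://yourcornpany.com/invoice",   # 'rn' posing as 'm'
             "https://y0urcompany.com/login"):     # zero posing as 'o'
    print(check_link(link))
```

The same check works by eye: hover over a link, read the actual domain right to left, and treat anything unfamiliar as unverified until confirmed through a separate channel.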
AI-Powered Cybercrime-as-a-Service: Hacking for Hire
Once upon a time, hacking required technical skills. Not anymore. AI-driven tools are now available for purchase on the dark web, meaning anyone can launch sophisticated cyberattacks without needing to write a single line of code.
This is called Cybercrime-as-a-Service (CaaS). Criminals can now rent AI-driven hacking tools for a fee. Want to send out thousands of AI-generated phishing emails? There’s a tool for that. Need AI-written malware that can bypass security systems? You can buy it. AI does the hard work for them, making it easier than ever for even the most inexperienced criminals to cause serious damage.
Stay Ahead of AI-Powered Cybercrime with Cyber Rebels
AI-powered cybercrime isn’t a future threat—it’s happening right now. Whether it’s deepfake scams, AI-generated phishing emails, or automated hacking, criminals are using AI to outpace traditional security measures.
At Cyber Rebels, we don’t just talk about cyber threats—we train people to fight back. Our expert-led cybersecurity awareness training equips businesses and individuals with the tools to recognise AI-driven scams, defend against phishing attacks, and detect deepfake fraud.
We offer interactive workshops, real-world attack simulations, and tailored security training to help you stay one step ahead of cybercriminals. Don’t wait until you’re the next victim—contact us today to secure your business, protect your employees, and outsmart AI-powered threats.
Director of Training and Development
Andy Longhurst is a cybersecurity trainer, web designer, and co-founder of Cyber Rebels. With over a decade of experience in digital safety, education, and web technology, Andy delivers hands-on cybersecurity workshops for small businesses, startups, and corporate teams. Drawing on his background as a teacher and IT consultant, he helps organisations navigate real-world threats through practical, jargon-free training. Andy’s work empowers people to protect their digital lives with confidence. When not running training sessions or consulting on security strategy, he’s usually studying the latest cyber threats and tactics—or making another cup of tea.