How Criminals Are Using AI to Create Havoc
- Christy Mackenzie

- Nov 4
- 4 min read
Updated: Nov 15

Every major shift in technology spawns an equal and opposite wave of abuse. Generative AI is no exception. It writes flawless emails, mimics human voices in seconds, fabricates video on demand, and automates tedious research at machine speed. For criminals and hostile operators, those strengths translate into scale, precision, and deniability.
The result is a new class of threats that feel personal and hit fast: deepfake CFOs ordering urgent wire transfers, cloned voices of family members begging for help, and political robocalls that borrow the voices of public figures to sow confusion on election day. This is not hypothetical. It is happening now: criminals are using AI to create havoc.
The risk lens: why AI supercharges crime
Three dynamics matter.
Friction collapse. AI erases skill barriers. A novice can draft a convincing spearphish, craft fake invoices, or generate malware snippets with prompts, not expertise. Underground markets even advertise chatbots designed for abuse.
Believability at scale. Voice clones and video deepfakes are cheap and fast. A few seconds of audio can yield a passable clone; a few minutes can synthesize a boardroom full of executives. Fraud teams are seeing case counts surge.
Automation across the kill chain. AI agents do recon, write code, translate errors, and iterate in loops. That accelerates phishing, intrusion, lateral movement, and extortion. Defenders report shrinking response windows as attackers operate at machine speed.
What criminals are doing with AI right now
1) Deepfake BEC 2.0: voice and video that deceive
The classic business email compromise is evolving. Criminals now stage full video calls with cloned faces and voices of executives to authorize transfers. Finance teams have been duped into sending large sums after "colleagues" and a "CFO" confirmed orders on screen.
Consumer pivot. The same tools fuel "family emergency" scams. Voice and video cloning are showing up in fraud campaigns and political robocalls meant to confuse voters or depress turnout.
2) AI as a phishing factory
Generative models produce perfect grammar, localized slang, and industry jargon, tailored from a target's public footprint. Scam kits and AI site builders can spin up phishing pages that feel on-brand, not just lookalike.
Dark-market models. Forums advertise tools that jailbreak mainstream models or front authorized APIs through illicit wrappers. Individual products come and go, but the trend is clear: criminal interest is high and offerings are evolving.
3) Ransomware acceleration
Ransomware crews use AI to write cleaner lures, triage stolen data, auto-translate threats, and even prioritize victims based on public filings and cyber insurance signals. The practical effect: compressed attack timelines and higher conversion from access to extortion.
4) Disinformation and public-chaos plays
Synthetic media has lowered costs for state-aligned operators and criminal opportunists. In the wild: cloned political voices, fake endorsements, and fabricated press statements that move markets or suppress votes.
5) Fraud at the edges: synthetic IDs, marketplaces, and OTP interception
Payments and identity ecosystems are being probed with AI that fabricates convincing documents, remixes leaked data, and walks victims through "support chats" to capture one-time passcodes. Expect more e-commerce scams, synthetic identities, and automated dispute abuse.
6) Autonomy creeps in
Agentic systems can already chain tasks across the intrusion lifecycle: scanning for vulnerabilities, writing exploit code, testing payloads, and adapting to defender responses. This isn't sci-fi; it's the visible direction of travel.
Near-term havoc scenarios to plan for
Emergency alert spoofs. AI-voiced "officials" instructing citizens to evacuate or shelter in place, seeded across robocalls and social video. Outcome: panic, traffic gridlock, and diverted emergency resources.
CEO and vendor fraud on video. Finance teams pressured on a live call by a cloned supplier CFO while Slack/Teams messages from "colleagues" pile on. Outcome: multi-account transfers that clear before any human callback.
Credential storms. Automated spearphish waves tuned per employee role and language, plus OTP harvesting through AI chat "support" bots. Outcome: more initial access for sale to ransomware affiliates.
Election noise ops. Cloned political voices guiding voters to the wrong polling place or urging them to "vote by text." Outcome: confusion and litigation, even if later debunked.
Brand-level confidence attacks. AI-polished fake press statements, investor calls, and âleaksâ designed to tank or pump a stock, amplified by synthetic personas. Outcome: market manipulation at machine speed.
Red flags and weak points criminals exploit
Trust without verification. Voice or video is no longer proof of presence.
Single-channel approvals. If money moves on the basis of one inbox, one chat, or one call, it is a target.
Crisis timing. End-of-day, quarter-close, or travel windows are chosen for maximum pressure.
Language and locale. AI nails idioms and regional nuance, reducing "foreign grammar" tells.
Assumed visibility. Attackers scrape investor calls, conference talks, and past Zooms to train clones.
Minimum viable defenses that shift risk (no product pitches)
Policy, not trust, for money movement. Mandate out-of-band callbacks to known numbers in your ERP/CRM for any payment change, wire, or gift card request. No exceptions for "we are on a plane." Tie bonuses to adherence.
Multi-person verification for sensitive actions. Require two human approvers on different teams and channels for vendor onboarding, bank detail changes, or data-destructive actions.
No-video approvals. Treat video as theater. Verification must live outside the meeting: signed requests in ticketing systems, with cryptographic signatures where feasible.
Employee drills with synthetic media. Train people to challenge perfect audio and video. Reward stoppage, not speed. Kill "speed at all costs" culture in finance and IT.
Email and chat hardening. Enforce least privilege on third-party integrations, lock down external domains, and add delegated review for executive-impersonation keywords.
Telemetry that buys time. Monitor for data staging and exfiltration, and set early-warning detections on new MFA enrollments, mailbox rules, and mass OAuth consents.
Incident playbooks that assume AI. Include deepfake validation steps, legal and comms templates about synthetic media, and pre-cleared regulatory contacts.
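To make the multi-person verification and signed-request ideas above concrete, here is a minimal sketch in Python. Everything in it is illustrative: the approver names, team labels, and in-source secrets are hypothetical (real keys would live in a secrets manager or HSM), and a production system would sit inside your ticketing workflow. The point is the shape of the control: a payment change goes through only when two approvers from different teams present valid cryptographic signatures over the same request, so a convincing voice or video clone of one executive is not enough.

```python
import hmac
import hashlib

# Hypothetical approver registry. Keys are per-person secrets; teams enforce
# that approvals come from different parts of the organization.
APPROVER_KEYS = {
    "alice@finance": b"alice-secret-key",
    "bob@legal": b"bob-secret-key",
}
APPROVER_TEAM = {
    "alice@finance": "finance",
    "bob@legal": "legal",
}

def sign_request(approver: str, request_id: str) -> str:
    """Approver signs the request ID with their personal key (HMAC-SHA256)."""
    return hmac.new(APPROVER_KEYS[approver],
                    request_id.encode(), hashlib.sha256).hexdigest()

def payment_change_allowed(request_id: str, approvals: dict) -> bool:
    """Allow only if valid signatures come from approvers on two different teams."""
    teams_that_approved = set()
    for approver, signature in approvals.items():
        key = APPROVER_KEYS.get(approver)
        if key is None:
            continue  # unknown approver: ignore
        expected = hmac.new(key, request_id.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            teams_that_approved.add(APPROVER_TEAM[approver])
    return len(teams_that_approved) >= 2
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids timing side channels; the cross-team requirement is what defeats a deepfake of any single person, however convincing the call looks.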
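The telemetry point above can also be sketched. This is a toy correlation rule, not any vendor's detection API: the event field names (`user`, `type`, `time`) are assumptions standing in for whatever your SIEM normalizes to. It flags accounts where a new MFA enrollment is followed within an hour by a new mailbox rule, a pairing that often marks account takeover and buys defenders time before exfiltration or fraud.

```python
from datetime import datetime, timedelta

def early_warnings(events, window=timedelta(hours=1)):
    """Flag users where an MFA enrollment is followed by a mailbox-rule
    creation within `window`. `events` is a list of dicts with hypothetical
    normalized fields: user, type, time."""
    by_user = {}
    for event in events:
        by_user.setdefault(event["user"], []).append(event)

    flagged = []
    for user, user_events in by_user.items():
        enrollments = [e["time"] for e in user_events
                       if e["type"] == "mfa_enrollment"]
        rule_creations = [e["time"] for e in user_events
                          if e["type"] == "mailbox_rule_created"]
        # Suspicious if any rule creation lands inside the window after
        # any enrollment for the same account.
        if any(timedelta(0) <= (rule - enroll) <= window
               for enroll in enrollments for rule in rule_creations):
            flagged.append(user)
    return flagged
```

A real deployment would add mass OAuth consent events and feed alerts into the callback and playbook procedures described earlier; the sketch only shows the correlation idea.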
What to watch next
Agentic tools for hands-off hacking. Expect more semi-autonomous intrusions against midsize enterprises, not just headline targets.
More "proxy" operations. Criminal groups moonlighting for states, blurring lines between crime, influence, and sabotage.
Regulatory friction. New rules on AI voice, synthetic content labeling, and platform liability will reshape attacker economics, but cat-and-mouse dynamics will persist.
Closing
AI is not just your friendly chatbot helping your kids with their math homework. Criminals are weaponizing AI for voice-clone heists, deepfake robocalls, and phishing at machine speed. Getting ahead of this and warning the public now is the best thing we can all do.


