A New Fraud Multiplier
Artificial intelligence is not inventing new kinds of crime so much as making old ones faster, cheaper and more convincing.
That is the picture emerging from a string of recent reports documenting how generative A.I. is being used across the fraud economy: to carry out persuasive phishing conversations, to help North Korea-linked hackers build malware and fake websites, and to create synthetic online personas that can lure victims into romance-style or parasocial scams.
What ties these seemingly different schemes together is the way A.I. lowers two barriers at once. It can reduce the technical expertise needed to build malicious tools, and it can sharpen the social manipulation at the heart of many scams. In effect, the technology is helping would-be criminals sound more polished, appear more credible and move more quickly.
The Federal Bureau of Investigation signaled the scale of the problem in its newly released 2025 Internet Crime Complaint Center report, which for the first time included a dedicated section on A.I.-related crime. The bureau said it had received 22,364 complaints involving A.I., representing roughly $893 million in adjusted losses. Those complaints spanned business email compromise, romance and confidence schemes, and so-called distress scams in which cloned voices are used to impersonate loved ones in urgent need of money.
The figures almost certainly understate the true toll. Many victims may never know that an email, voice message, flirtatious profile or fraudulent website was generated or refined by software.
From Better Phishing to Better Pretending
One recent test of leading A.I. systems underscored the concern among cybersecurity researchers that the danger is not only what models can code, but how well they can talk.
Security experts have spent years debating whether generative A.I. would significantly improve malware writing or exploit development. But another threat is becoming harder to dismiss: the models’ growing fluency in persuasion. In controlled interactions, several systems were able to sustain plausible scam or phishing-style exchanges, adapting tone and language in ways that could make social engineering more effective.
That matters because many cyberattacks still begin not with a sophisticated technical exploit but with an ordinary human mistake — a click, a reply, a wire transfer, a downloaded file. If A.I. can help attackers produce more believable messages at scale, then mediocre operators may achieve a level of polish that once required practiced fraudsters.
Federal authorities had already warned this was coming. In late 2024, the F.B.I. said criminals were using generative A.I. to create convincing text, fake social media profiles, fraudulent websites, synthetic images and cloned voices for use in phishing, romance scams, investment fraud and impersonation schemes. More recently, Google’s 2026 threat forecast pointed to A.I.-enabled social engineering as a growing risk, particularly when paired with financially motivated operations linked to North Korea.
North Korean Hackers and a Faster Assembly Line
That linkage is no longer theoretical.
Security researchers recently described a North Korea-linked campaign in which attackers used A.I. tools for a wide range of tasks, including writing malware, generating code more quickly and building fraudulent company websites designed to trick targets. The security firm Expel estimated that one such operation stole as much as $12 million over a three-month period.
The significance of that finding is not merely the dollar amount. North Korean cyber operations have long been associated with cryptocurrency theft and other financially motivated intrusions used to generate revenue for the regime, which faces heavy international sanctions. What appears to be changing is the efficiency of the workflow.
A.I. can help attackers spin up infrastructure more quickly, draft convincing corporate language, imitate legitimate hiring or business fronts and troubleshoot code without needing top-tier engineering talent. In that sense, the technology acts as a force multiplier for operators who may be persistent but not especially sophisticated. It narrows the gap between elite hacking groups and lesser-skilled affiliates or copycats.
If that trend continues, cybercrime may become less dependent on rare technical expertise and more reliant on speed, iteration and volume.
The Synthetic Influencer as Scam Vehicle
At the other end of the spectrum, A.I. is also being used in grifts that are less technical and more intimate.
Recent reporting has shown scammers monetizing fully generated online identities, including attractive political or lifestyle-themed personas tailored to specific audiences. In one example, a creator used generative tools to fabricate photos and videos of a young conservative woman, then sold access to that persona to men who believed she was real.
The scam is contemporary in its tools but familiar in its psychology. Romance fraud and confidence schemes have long depended on the victim’s willingness to project authenticity onto someone they know only through a screen. Generative A.I. makes it easier to sustain that illusion with a steady stream of customized images, messages and videos, all without needing a real accomplice or stolen photographs from an existing person.
It also allows scammers to target narrower niches. A fabricated persona can be designed not just to appear attractive, but to match a specific ideology, subculture, age group or fantasy. In that way, A.I. may enable a more segmented, data-driven version of the old catfishing playbook.
Why This Moment Feels Different
Fraud rings have always adopted new communications tools quickly, from email to social media to encrypted messaging. But A.I. is proving unusual in the breadth of tasks it can assist with. The same family of tools can draft a phishing note, create a fake headshot, clone a voice, write snippets of malware, generate a website’s marketing copy and translate a scam into fluent local language.
That convergence is what worries investigators. The risk is not one breakthrough technique, but the formation of a more automated attack pipeline in which synthetic content, technical tooling and psychological manipulation reinforce one another.
How far that pipeline will scale remains uncertain. Major model providers have imposed guardrails meant to block overtly malicious uses, and companies have stepped up account enforcement. But open-source or lightly restricted models remain available, and malicious users can often rephrase requests, combine multiple tools or move to less-policed platforms.
Defenders, meanwhile, face a difficult asymmetry. Detection systems may catch known malware or suspicious domains, but it is harder to flag a charming message, a convincing video or a cloned voice before damage is done. And public awareness campaigns can lag behind the technology’s pace, especially as synthetic content grows more naturalistic.
An Old Crime Wave, Newly Accelerated
For now, the broadest lesson is that A.I.-enabled fraud should be understood less as a separate category than as an accelerant. The underlying crimes — phishing, impersonation, extortion, romance scams, credential theft, fake investment pitches — are all well established. What is changing is the cost of producing believable deception.
That shift has consequences beyond individual victims. Businesses may face more convincing email compromise attempts. Job seekers may encounter increasingly polished fake recruiters and fraudulent company sites. Families may struggle to distinguish a real emergency call from a cloned voice. And online platforms, already flooded with impersonation and spam, may find it harder to separate authentic users from synthetic ones.
The technology industry has spent much of the last several years advertising A.I. as a productivity engine. In the criminal world, it is becoming one too.
Sources
Further reading and reporting used to add context:
- https://www.wired.com/story/ai-generated-maga-girls/
- https://www.wired.com/story/ai-tools-are-helping-mediocre-north-korean-hackers-steal-millions/
- 5 AI Models Tried to Scam Me. Some of Them Were Scary Good | WIRED
- https://www.weforum.org/stories/2026/02/ai-supercharging-global-cyber-fraud-crisis-could-also-solve-it/
- https://www.axios.com/2026/02/13/valentines-day-romance-scam-ai-deepfake
- https://www.forbes.com/sites/timkeary/2026/04/07/fbi-reports-208-billion-lost-to-cybercrime-as-hackers-turn-to-ai/
- https://www.pcgamer.com/software/ai/ai-assisted-hacking-group-hits-targets-with-a-complicated-social-engineering-scam-that-involves-deepfaked-ceos-spoofed-zoom-calls-and-a-malicious-troubleshooting-program/
- https://www.techradar.com/pro/security/in-2026-cybercrime-has-reached-a-point-of-total-convergence-new-research-claims-ai-attacks-are-taking-over-so-how-can-your-business-stay-safe
- https://www.techradar.com/pro/security/deepfake-worries-hit-a-new-high-as-one-in-four-americans-say-they-have-received-a-deepfake-voice-call-in-the-past-12-months-experts-blame-the-weaponization-of-ai
- https://www.reddit.com/r/antitrump/comments/1sru23a/magaverse_is_getting_duped_not_only_within_the_us/
- https://cointelegraph.com/news/ai-enabled-crypto-scams-rise-500-percent
- https://www.forbes.com/sites/technology/article/ai-generated-scams/?ss=ai
- https://www.axios.com/2026/04/22/openai-gpt-cyber-government-meeting
- https://www.fbi.gov/news/press-releases/cryptocurrency-and-ai-scams-bilk-americans-of-billions
- https://www.techradar.com/pro/security/scammers-build-fake-fbi-crime-reporting-portals-to-steal-personal-info-warns-fbi
- https://socradar.io/blog/fbi-ic3-2025-internet-crime-report-10-takeaways/
- https://www.techradar.com/pro/security/openai-bans-chinese-north-korean-hacker-accounts-using-chatgpt-to-launch-surveillance
- https://www.ic3.gov/PSA/2025
- https://www.asianfin.com/news/259882
- https://www.axios.com/2026/03/29/claude-mythos-anthropic-cyberattack-ai-agents
- https://www.fox5dc.com/news/fbi-report-adds-ai-enabled-scams-first-time-893-million-reported-stolen
- Internet Crime Complaint Center (IC3) | Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud
- https://www.govtech.com/security/fbi-crypto-ai-scams-drove-billions-in-losses-in-2025
- https://www.kold.com/2026/01/08/ai-linked-surge-internet-crimes-arizona-fbi-says/
- https://www.aha.org/cybersecurity-government-intelligence-reports/2025-05-12-2024-fbi-internet-crime-report
- https://nflo.tech/knowledge-base/artificial-intelligence-cyberattacks-nation-state-groups-google-report/
- https://www.aol.com/news/fbi-report-highlights-900m-lost-155510626.html
- https://www.fdic.gov/financial-reports/2025-annual-report.pdf
- https://www.reddit.com/r/CryptoScams/comments/1sfrxlh/worthless_fbiicegov/
- https://arxiv.org/abs/2603.11528
- https://cloud.google.com/blog/topics/threat-intelligence/dprk-adopts-etherhiding
- https://cloud.google.com/blog/topics/threat-intelligence/apt37-overlooked-north-korean-actor/
- https://cloud.google.com/blog/topics/threat-intelligence/apt45-north-korea-digital-military-machine
- https://cloud.google.com/blog/topics/threat-intelligence/
- https://cloud.google.com/blog/topics/threat-intelligence/apt43-north-korea-cybercrime-espionage
- https://cloud.google.com/blog/topics/threat-intelligence/apt38-details-on-new-north-korean-regime-backed-threat-group
- https://openai.com/index/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors/
- https://openai.com/index/bringing-chatgpt-to-genaimil/
- https://cloud.google.com/blog/products/identity-security/introducing-google-threat-intelligence-actionable-threat-intelligence-at-google-scale-at-rsa/
- https://cloud.google.com/blog/topics/threat-intelligence/cybersecurity-forecast-2026/
- https://cloud.google.com/blog/topics/threat-intelligence/dprk-it-workers-expanding-scope-scale
- https://openai.com/news/security/
- https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf
- https://cdn.openai.com/threat-intelligence-reports/7d662b68-952f-4dfd-a2f2-fe55b041cc4a/disrupting-malicious-uses-of-ai-october-2025.pdf
- https://cdn.openai.com/threat-intelligence-reports/disrupting-malicious-uses-of-our-models-february-2025-update.pdf
- https://cdn.openai.com/global-affairs/f9361fe7-e452-4c78-94dc-e6946c73c858/openai-south-korea-economic-blueprint-october-2025.pdf