The Anatomy of an AI Attack
Anatomy of a Hack: How AI Clones Your Company in 3 Steps
Written by Vito Prasad
Published on December 16, 2025

Editor’s Note: This is Part 2 of our 5-part series on “AI-Powered Spearphishing at Scale.” In Part 1, we explored the S-curve of criminal adoption. Today, we break down the pipeline using a live demonstration to show exactly how these attacks are built.

If You Want to Understand the Modern Threat, Stop Picturing “Hackers in Hoodies”

Watch the full attack pipeline demonstration in under a minute.

You need to start thinking about automation pipelines.

In the past, a high-quality spearphishing attack was expensive. A human operator had to research a target, understand their role, draft a convincing lure, and code a malicious landing page. It worked—but it didn’t scale.

Generative AI has solved the scalability problem.

According to our analysis, AI has reduced time-to-attack from hours to seconds. By automating three core stages, a single bad actor can now run mass-scale campaigns with the precision of a sniper.

Here is exactly how that pipeline works.

Step 1: Automated Social Engineering at Scale (OSINT)

JSON Dossier output

The first stage is data gathering. In the cyber world, this is Open-Source Intelligence (OSINT). Attackers use AI agents to scrape and synthesize public data on a target to find a "hook."

As seen in the video's opening, we targeted a real profile: Cy Khormaee. We simply asked the AI to "Gather recent news." Within seconds, it returned a structured dossier containing:

  • His company's recent $13M seed round.
  • A recent appearance on the “Curiosity-Driven GTM” podcast.
  • An upcoming speaking role at the eCrime 2025 conference.

This isn’t a generic scrape. The AI is identifying context. The podcast appearance isn't just news—it's the perfect pretext for a "transcript review" lure.
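
To make that output concrete, here is a minimal sketch of the kind of structured dossier such an agent emits. The schema and field names below are our own illustration of the demo's output, not the format of any particular tool.

```python
# Illustrative only: the rough shape of an AI-generated OSINT dossier.
# Field names are hypothetical; the values mirror the findings listed above.
from dataclasses import dataclass, field

@dataclass
class Dossier:
    target: str
    recent_events: list[dict] = field(default_factory=list)

dossier = Dossier(
    target="Cy Khormaee",
    recent_events=[
        {"type": "funding", "detail": "$13M seed round"},
        {"type": "podcast", "detail": "Guest on 'Curiosity-Driven GTM'"},
        {"type": "conference", "detail": "Speaking at eCrime 2025"},
    ],
)

# The "hook": the event that best supports a believable pretext.
hook = next(e for e in dossier.recent_events if e["type"] == "conference")
```

The exact format matters less than the fact that the findings are now machine-readable: every downstream stage of the pipeline can consume this structure without a human in the loop.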

Step 2: Context-Aware Generation

"Conference Speaker" email

Once the data is structured, the AI becomes a copywriter. The attacker feeds the OSINT dossier into a large language model. The goal is not to create a generic template, but a bespoke lure that references the specific events found in Step 1.

In the screenshot above, notice how it drafted a "Conference Lure" that explicitly references the target's "AI-Powered Spearphishing" panel.

Because the context is real, the victim’s brain shortcuts the “is this suspicious?” step. It feels like a normal part of their day.

Step 3: Deployment & Cloning

Fake Login Page

The final stage is the close: turning a convincing email into a credential-harvesting event. The image above shows the result: a pixel-perfect clone of the APWG conference portal.

Here, AI acts like a junior front-end engineer. It can take the visual identity of a legitimate brand and generate a near-pixel-perfect clone in seconds.

  • The Look: It mimics the color scheme and formatting of the legitimate event website.
  • The Content: It auto-populated the target’s real headshot and session title.
  • The Trap: It features a "Confirm Attendance" button—the mechanism used to harvest credentials.

By sitting between the user and the legitimate service, the AI is effectively automating a man-in-the-middle attack on human trust, capturing credentials in real time. To the user, entering their login details here feels like a standard administrative task.

Why Your Firewall Misses It: Semantic Fuzzing

So why didn’t the Secure Email Gateway (SEG) or legacy filters stop this?

A big part of the answer is semantic fuzzing.

Legacy email security tools were designed to block viruses and malicious code by looking for known signatures and suspicious attachments. They struggle to detect intent.

AI agents know this. They can automatically rephrase messages so the intent stays the same while the surface language changes. Instead of “Reset your password,” the AI writes “Validate your security profile.”
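
A toy illustration of that gap, using hypothetical filter rules of our own rather than any real gateway's rule set: the keyword check catches the original phrasing and passes the rephrased one, even though the intent is identical.

```python
# Illustrative only: why surface-level keyword rules miss semantically
# fuzzed lures. The phrase list is hypothetical; real gateways are more
# sophisticated, but the failure mode -- matching strings, not intent -- is the same.
SUSPICIOUS_PHRASES = ["reset your password", "verify your account"]

def keyword_filter(message: str) -> bool:
    """Return True if the message contains a known-bad phrase."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

original = "Urgent: reset your password before the portal closes."
rephrased = "Please validate your security profile before the portal closes."

print(keyword_filter(original))   # True  -- caught
print(keyword_filter(rephrased))  # False -- same intent, different surface form
```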

The Takeaway

The barrier to “state-sponsored-level” sophistication has effectively dropped to zero. As the demonstration above shows, automated pipelines allow low-skill attackers to:

  1. Build high-quality OSINT dossiers in seconds.
  2. Generate context-perfect lures that feel routine.
  3. Clone the external pages and portals your people trust with unnerving accuracy.

They are faster than you. They know your public footprint. And they are systematically testing what gets through your controls.

Next Up: Who is in the Crosshairs?

Now that you understand the mechanics of the pipeline, the next question is: who is it pointing at?

In Part 3, we open the “Bullseye Report” to reveal the specific job titles—from the C-Suite to Finance—that are absorbing the brunt of these AI-powered attacks, and why attackers have shifted to a "Criminal ABM" strategy.

Inspect the Obfuscation Mechanics

"Semantic Fuzzing" and HTML obfuscation (like the <tt> tag injection techniques attackers use) are specifically designed to break rule-based gateways.

See the mechanics. In our demo, we deconstruct these evasion techniques at the code level. We will show you exactly how attackers hide their payload and how our engine parses the raw HTML to recover the true intent.

[ See the Technical Breakdown ]
