TL;DR
- AI cold email personalisation generates first lines from real prospect data — LinkedIn posts, company news, job changes — not merge fields
- Claygent (inside Clay) processes 500+ contacts per hour at $0.02–$0.05 per personalised line vs. $3–$5 for a human researcher
- AI-personalised sequences produce 1.5–2x the reply rate of template-personalised emails against the same list
- If Claygent can't find specific data, it should return "SKIP" — generic AI personalisation is worse than no personalisation
- The first line gets the email read. The rest of the email still has to be well-written
Contents
- What AI Cold Email Personalisation Actually Means in 2026
- Why Template Personalisation Stopped Working
- How Claygent AI Personalisation Works in Practice
- Building Your AI Personalisation Workflow: Step by Step
- Writing the AI Prompt That Produces Good First Lines
- AI Personalisation at Scale: Expected Results and Limits
- FAQ: AI Cold Email Personalisation for B2B
AI cold email personalisation is the practice of using AI to generate context-specific email content — first lines, opening hooks, and body references — based on real data about each prospect, not template variables.
This is not {{first_name}} and {{company_name}} swaps. That's merge field personalisation. It worked in 2020. In 2026, recipients can spot mail-merge in the first sentence. They delete before reading sentence two.
We generate AI-personalised first lines for every contact in every campaign. Claygent reads a prospect's LinkedIn activity, company news, recent job postings, or product launches and writes a sentence that references something specific about their situation. The throughput is 500+ contacts per hour. Reply rates on AI-personalised sequences consistently run higher than template-based sends — the gap is typically 1.5–2x.
Here's how to build the workflow.
What AI Cold Email Personalisation Actually Means in 2026
Let's separate three levels of personalisation:
Level 1: Merge fields. "Hi {{first_name}}, I noticed {{company_name}} is growing. We help companies like yours..."
This is what most cold email still looks like. The prospect knows it's automated. It doesn't reference anything specific about them.
Level 2: Segment-based personalisation. "Hi Sarah, as a VP Sales at a Series A SaaS company, you're probably thinking about pipeline predictability..."
Better — it uses ICP data to tailor the message. But it's still the same email sent to every VP Sales at a Series A company. The prospect can tell.
Level 3: AI-generated contextual personalisation. "Sarah — saw your LinkedIn post about rebuilding the outbound function after your SDR team churned. That's exactly the problem signal-based systems are designed to solve."
This references something specific about the individual. The prospect can't tell it was AI-generated because it reads like someone actually looked at their profile. An AI did, at scale.
Level 3 is what we're talking about. It's what Claygent produces.
Why Template Personalisation Stopped Working
Two things killed it:
1. Everyone is doing it. The average B2B decision-maker gets 30–50 cold emails per week. Most use the same personalisation playbook: mention their company name, reference their industry, maybe pull their title. When every email uses the same trick, the trick stops being a differentiator.
2. AI made bad email cheap. GPT-powered email tools let anyone generate 10,000 "personalised" cold emails for the price of an API call. Volume went up. Quality went down. Inbox providers got better at detecting bulk sends. Prospects got better at detecting AI-generated content that says something about their "industry" but nothing about them specifically.
The bar for "personalised" moved from "you used my name" to "you referenced something I said or did recently that no template could produce." That's the bar AI-contextual personalisation clears.
How Claygent AI Personalisation Works in Practice
Claygent is Clay's built-in AI agent. It reads data sources for each contact and generates text based on a prompt you define.
Inputs Claygent can read:
- LinkedIn profile (current role, bio, recent posts)
- Company website and news (press releases, product launches, blog posts)
- Crunchbase data (funding rounds, investor info)
- Job postings (hiring signals, open roles)
- G2 or Capterra reviews (product sentiment)
- Any enrichment data in your Clay table
What Claygent produces: A personalised opening line (or paragraph) that references something specific about the prospect. The output quality depends entirely on the prompt — a vague prompt produces vague output. A specific prompt with clear constraints produces output that reads like a human researcher wrote it.
Processing speed: 500+ contacts per hour, depending on the complexity of the prompt and the data sources being read.
Building Your AI Personalisation Workflow: Step by Step
Step 1: Enrich the Clay table with personalisation data
Before Claygent can write a personalised line, it needs material to personalise with. Add these enrichment columns to your Clay table:
- Recent LinkedIn post content — pull the prospect's last 1–3 LinkedIn posts
- Company news — recent press mentions, product launches, funding announcements
- Job postings — what roles is the company hiring for right now?
- LinkedIn bio/summary — the prospect's self-description
Not every contact will have data in every column. That's fine. Claygent works with whatever is available and degrades gracefully.
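To make "degrades gracefully" concrete, here is a minimal sketch of what one enriched row looks like before Claygent runs. The field names are illustrative, not Clay's actual column schema; the point is that any enrichment field may be empty, and a row with no signals is a "SKIP" candidate:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EnrichedContact:
    """One Clay table row after Step 1 enrichment. Any optional
    field may be None when the data source had nothing."""
    first_name: str
    company: str
    linkedin_posts: Optional[list[str]] = None  # last 1-3 posts
    company_news: Optional[str] = None          # press, launches, funding
    open_roles: Optional[list[str]] = None      # current job postings
    linkedin_bio: Optional[str] = None          # self-description

    def has_personalisation_data(self) -> bool:
        # Claygent needs at least one specific signal;
        # otherwise expect it to return "SKIP"
        return any([self.linkedin_posts, self.company_news,
                    self.open_roles, self.linkedin_bio])

sparse = EnrichedContact(first_name="Sarah", company="Acme")
print(sparse.has_personalisation_data())  # False -> expect "SKIP"
```

A row only needs one populated signal column to be workable, which is why sparse data is fine as long as it isn't universally sparse.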
Step 2: Write the Claygent prompt
This is where output quality is determined. See the next section for the full prompt template.
Step 3: Add a Claygent column to the Clay table
Create a new column using the Claygent integration. Set the prompt. Point it to the data columns (LinkedIn posts, company news, etc.). Run it on all rows.
Step 4: Quality check a sample
Check 20–30 outputs manually. Look for:
- Is it specific? Does it reference something only this person's profile would produce?
- Is it under 25 words? Shorter first lines perform better.
- Does it avoid generic AI phrases — nothing like "leverage," "seamless," "impressive," or "exciting"?
- Does it sound human or AI-generated? If you can tell it's AI, the prospect will tell even faster.
Adjust the prompt based on what you find.
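Part of this QC pass can be automated before you eyeball the sample. A sketch of a linting check, assuming the outputs are exported as plain strings; the banned-phrase list mirrors the prompt rules below plus the generic AI words above, and the 25-word threshold is the one from the checklist:

```python
# Phrases the prompt forbids; lowercase for case-insensitive matching
BANNED = {"i noticed", "i came across", "impressive", "exciting",
          "congrats", "leverage", "seamless"}

def qc_issues(line: str) -> list[str]:
    """Flag a Claygent first line against the mechanical QC rules.
    Returns an empty list for clean lines and for valid 'SKIP's."""
    text = line.strip()
    issues: list[str] = []
    if text == "SKIP":
        return issues                      # SKIP is a valid output
    if len(text.split()) > 25:
        issues.append("over 25 words")
    lowered = text.lower()
    for phrase in sorted(BANNED):          # sorted for stable output
        if phrase in lowered:
            issues.append(f"banned phrase: {phrase!r}")
    return issues

print(qc_issues("I noticed your impressive growth at Acme Corp."))
```

This only catches mechanical failures. Specificity and "does it sound human" still need the manual read on 20–30 rows.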
Step 5: Merge into email template and export
The Claygent output becomes a column you merge into your cold email template:
{{claygent_first_line}}
[Rest of your email — value prop, credibility, CTA]
Export the contacts with their personalised first lines to Smartlead for sequencing.
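If you post-process the export yourself before loading into Smartlead, the merge step reduces to a couple of lines. A hedged sketch (function and column names are ours, not a Clay or Smartlead API), including the SKIP fallback described later in this article:

```python
def render_email(first_line: str, body: str) -> str:
    """Merge the Claygent column into the template. When Claygent
    returned SKIP, send the body without a personalised opener
    rather than a generic fake one."""
    if first_line.strip() == "SKIP":
        return body
    return f"{first_line.strip()}\n\n{body}"

body = "Value prop, credibility signal, CTA."
print(render_email("Your post about SDR churn landed.", body))
```

In practice Smartlead's own merge fields can do this substitution; the function just makes the SKIP branch explicit.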
Writing the AI Prompt That Produces Good First Lines
The prompt is the most important variable. Here's the template we use:
Read this person's LinkedIn profile, recent LinkedIn posts, and their
company's recent news. Write one opening sentence for a cold email.
Rules:
- Maximum 25 words
- Reference something specific: a LinkedIn post topic, a company
milestone, a hiring pattern, or a role change
- Do not use their first name in the sentence
- Do not use phrases like "I noticed" or "I came across"
- Do not use "impressive" or "exciting" or "congrats"
- Write in a casual, direct tone — like a peer, not a salesperson
- If no specific data is available, write "SKIP" instead of a generic line
Good example: "Your post about replacing SDR teams with automation
landed — we're seeing the same shift across our SaaS clients."
Bad example: "I noticed your impressive growth at Acme Corp and wanted
to reach out about an exciting opportunity."
Why the "SKIP" rule matters: If Claygent doesn't have enough data to write something specific, it should return "SKIP" rather than generating a generic line. Generic AI personalisation is worse than no personalisation — it tells the prospect you tried to fake specificity and couldn't. For contacts that return "SKIP," send a version of the email without a personalised first line. An honest cold email outperforms a pretend-personalised one.
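Operationally, the SKIP rule means the export splits into two sequences. A minimal sketch of that partition step, assuming rows are exported as dicts with a hypothetical `first_line` column:

```python
def split_for_export(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition contacts by whether Claygent produced a usable line.
    Personalised rows get the first-line template; SKIP rows get the
    plain version of the email."""
    personalised = [r for r in rows
                    if r.get("first_line", "").strip() != "SKIP"]
    generic = [r for r in rows
               if r.get("first_line", "").strip() == "SKIP"]
    return personalised, generic

rows = [
    {"email": "a@example.com", "first_line": "Your post about SDR churn landed."},
    {"email": "b@example.com", "first_line": "SKIP"},
]
personalised, generic = split_for_export(rows)
print(len(personalised), len(generic))  # 1 1
```

On a typical SaaS ICP list, expect the generic bucket to hold roughly 5–10% of contacts (per the limits section below).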
AI Personalisation at Scale: Expected Results and Limits
Results
| Metric | AI-personalised email | Template-personalised email |
|---|---|---|
| Reply rate | 4–8% (signal-triggered) | 2–4% (same list, templates) |
| Prospect response quality | Higher — more qualified replies | Lower — more "not interested" |
| Time per 500 contacts | 1–2 hours (automated) | 8–15 hours (manual research) |
| Cost per personalised line | $0.02–$0.05 each | $3–$5 each (human writer) |
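The cost gap compounds with list size. Plugging the table's per-line figures into a monthly volume (the 2,000-contact list is illustrative):

```python
def monthly_cost(contacts: int, low_per_line: float,
                 high_per_line: float) -> tuple[float, float]:
    """Low/high cost range for personalising a contact list,
    given a per-line price range in dollars."""
    return contacts * low_per_line, contacts * high_per_line

ai = monthly_cost(2000, 0.02, 0.05)    # AI: $40-$100
human = monthly_cost(2000, 3.00, 5.00) # human researcher: $6,000-$10,000
print(ai, human)
```

At that volume the AI route costs roughly 1% of the human-researcher route, which is why the SKIP-and-spot-check overhead is easy to absorb.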
Limits
Sparse LinkedIn profiles. If a prospect hasn't posted on LinkedIn and their company has no recent news, Claygent has nothing to work with. The "SKIP" rule handles this — roughly 5–10% of contacts will return SKIP on a typical SaaS ICP list.
Prompt sensitivity. Small changes to the prompt produce large changes in output quality. You need to test and iterate the prompt on 20–30 contacts before running it at scale.
Occasional misses. Claygent occasionally generates a line that misreads the context — references a post from 2 years ago, misinterprets a company announcement, or generates something awkward. Spot-checking a sample catches these. Plan for a 3–5% error rate.
It doesn't replace good email copy. The first line gets the email opened and read. The rest of the email — value prop, credibility signal, CTA — still needs to be well-written. AI personalisation is a turret on a tank. If the tank is bad, the turret doesn't save it.
Cold Email Swipe File — Download Free
10 cold email templates with AI-personalised first lines, broken down by ICP segment and funnel stage. Includes the Claygent prompt templates we use and performance benchmarks.
FAQ: AI Cold Email Personalisation for B2B
What is AI cold email personalisation?
AI cold email personalisation uses AI tools like Claygent (inside Clay) to generate email opening lines based on real prospect data — LinkedIn posts, company news, hiring signals, funding events. Unlike template personalisation (merge fields), AI personalisation produces content specific to the individual that no template could generate. It's the difference between "Hi {{name}}, I see you work at {{company}}" and a sentence referencing a prospect's actual LinkedIn post from last week.
How much does AI email personalisation cost per contact?
Using Claygent inside Clay, the cost is roughly $0.02–$0.05 per personalised line at campaign volumes. Compare that to $3–$5 per contact for a human researcher writing first lines manually. At 2,000 contacts per month, that's $40–$100 for AI personalisation vs. $6,000–$10,000 for human personalisation.
Does AI personalisation improve cold email reply rates?
Yes. In our campaign data, AI-personalised emails produce 1.5–2x the reply rate of template-personalised emails against the same list. The combination of signal-based timing plus AI personalisation produces the highest rates — 4–8% reply rates on triggered sends.
Can prospects tell the email was personalised by AI?
Not when it's done correctly. The key is referencing specific, verifiable details about the prospect — a recent LinkedIn post, a company announcement, a hiring pattern. Generic AI personalisation ("I noticed your impressive growth") is detectable. Specific AI personalisation that references something real is indistinguishable from a human researcher's output.