European Benchmark Report 2026

Here's what European B2B SaaS & AI sales reps don't do on calls

The first benchmark built exclusively on European call data. Scored, not aggregated. 1,400+ discovery and demo calls across 32 B2B SaaS companies in Benelux and DACH, assessed over seven years.

1,400+
Scored call observations
32
European B2B SaaS companies
22
Criteria tracked per call
7 yrs
Benchmark data 2019 to 2026
If you read nothing else

Differentiation is created in discovery, confirmed in the demo, and protected through deal management. European B2B SaaS sales teams are failing at all three.

The bad news

What your reps are probably not doing on calls

Six behaviours show up as systematic gaps across almost every team in the dataset. Each scores 4 out of 10 or lower on average. If your team doesn't explicitly train for these, you're leaking revenue.

1/10

Your reps don't use customer examples in demos

The most effective way to make value concrete is used in only 10% of demos. Reps show features instead of proof. Buyers walk away informed, not persuaded.

Demo
2/10

Your reps never establish what the problem costs

75% of calls score zero on business impact. Without a euro figure attached to the pain, buyers have no internal case to make. Pricing becomes the only differentiator.

Discovery
2/10

Nobody recaps what the buyer just said

Active listening and summarising scores 0.22 out of 1. Reps hear, but they don't confirm. Buyers leave unsure whether they've been understood, and internal champions lose their script.

Discovery
3/10

No rep shifts how the buyer thinks about the problem

The single behaviour that most separates winners from the rest: introducing new information that makes the buyer reconsider their own situation. Almost nobody does it.

Discovery
4/10

Demos ignore what was learned in discovery

Discovery-to-demo alignment scores 0.39. Reps ask the right questions, then present the same generic deck to everyone. The tailoring never makes it into the room.

Demo
4/10

Reps show features but never confirm why they matter

Anchoring, the act of tying a feature back to a specific pain the buyer mentioned, happens in one out of three demos. The rest are monologues.

Demo
3 more behaviours scored below 5/10

Get the full European Benchmark 2026 to see what else your reps are missing

Includes the 3 demo & deal-management failures, the full 22-criteria scoring methodology, and the year-over-year trend data.

Unlock the full report →
The good news

Here's what's actually working

A few behaviours show up consistently across the dataset. If your team is doing these well, protect them. If not, they're the easiest wins.

9/10

Almost everyone opens calls well

Warm opening, agenda setting, clear expectations. Table stakes, but consistently executed. The foundation is in place.

8/10

Reps close calls with a clear next action

Concrete next step, owner, purpose, timing. Deal management starts here, and it's being done reasonably well. This is what holds deals together.

8/10

Situation understanding is solid

Reps know how to map out team structure, tech stack, and current process. What they don't do is turn that map into something the buyer didn't already know.

8/10

Pricing conversations happen cleanly

When it's time to talk money, reps hold the line reasonably well. The problem isn't in how pricing is presented. It's that value was never anchored before pricing came up.

Get the full report

All 22 criteria, scored and benchmarked

The full report breaks down every criterion with averages, distributions, example quotes from real calls, and how to close each gap. Free to download.

Full Scorecard

22 Criteria Benchmarked

Business Impact: 2/10
Buyer's Vision: 3/10
Customer Stories: 1/10
Active Listening: 2/10
Anchoring: 4/10
Disc-Demo Alignment: 4/10
Future State: 4/10
Timeline & Urgency: 5/10
Pain-led Prioritisation: 5/10
Decision-makers: 5/10
Demo Agenda: 6/10
Competitive Landscape: 6/10
Decision Process: 6/10
Time & Availability: 6/10
Pain Exploration: 6/10
Next Steps Agreed: 7/10
Full report →

European B2B SaaS & AI Sales Benchmark 2026

Enter your details and I'll send the full PDF report straight to your inbox. 22 pages, all criteria scored and explained, with real call excerpts.

Free. Unsubscribe anytime. No cold sales calls.
Why this benchmark matters

Most sales benchmarks are irrelevant for European teams

The reports you usually download are built on aggregate US data and lumped across industries. This one isn't.

01

European buyers behave differently

Longer evaluation cycles. More stakeholders. Lower tolerance for American-style hard closing. The tactics that win in the US often kill deals in Benelux and DACH.

02

SaaS-specific, not aggregate

This benchmark is B2B SaaS and AI only. No mixed samples with field sales or retail. The behaviours measured are the ones that actually separate SaaS winners from the rest.

03

Scored by hand, not by AI

Every call in the sample was reviewed and scored manually, capturing the context, tone, and judgement calls that transcription-based tools miss. Slower to build, far more accurate.

91%
Calls in Benelux & DACH
B2B SaaS
& AI companies only
€2M–€50M
ARR range of companies
20–300
FTE team size range
FAQ

Questions about the benchmark

Everything you might want to know before downloading.

What is the methodology behind the scores?
Every call is scored on 22 behavioural criteria, using a simple three-point scale: 0 = not achieved, 0.5 = somewhat achieved, 1 = fully achieved. The criteria draw on a wide set of best practices that go far beyond SPICED or MEDDIC, while incorporating those discovery frameworks. The averages you see are across the full dataset.
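In code, the averaging step looks roughly like this. A minimal sketch with made-up scores and criterion names, not the report's actual data: each call gets a 0 / 0.5 / 1 mark per criterion, and the per-criterion mean is scaled to the 0–10 figures shown above.

```python
from statistics import mean

# Hypothetical scored calls: each maps criterion -> 0, 0.5, or 1
calls = [
    {"Business Impact": 0,   "Active Listening": 0.5, "Customer Stories": 0},
    {"Business Impact": 0.5, "Active Listening": 0,   "Customer Stories": 0},
    {"Business Impact": 0,   "Active Listening": 0.5, "Customer Stories": 0.5},
]

def benchmark(calls):
    criteria = calls[0].keys()
    # Average each criterion across all calls, then scale 0-1 -> 0-10
    return {c: round(mean(call[c] for call in calls) * 10, 1) for c in criteria}

print(benchmark(calls))
```

With three calls, a criterion fully achieved once and missed twice averages to roughly 3.3/10, which is how a behaviour ends up near the bottom of the scorecard.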
Why can't I just rely on AI scoring via Gong, Modjo, and the like?
These tools mostly run a simple prompt behind each criterion, e.g. for pain questions: "Has the pain been discussed in the call?" The AI then scans the transcript for evidence to support the question; it tries to please. Even pains presented by the seller during a slide deck get counted as pain discussions. The devil is in the detail, which is why I built this benchmark on my manual call scorings, both from before AI and after ('23+). If you are using AI, ask each prompt to provide the evidence behind each score; it will show you the shaky ground LLMs build their scoring on.
How is this different from benchmarks from Gong, Salesforce, or HubSpot?
Those benchmarks are built on US data and aggregated across industries. This one is built on European B2B SaaS & AI calls specifically, in Benelux and DACH. What works in Silicon Valley doesn't always work in Amsterdam or Frankfurt, and the buying culture is materially different.
Where does the data come from?
From 7 years of hands-on analysis inside European B2B SaaS companies. Every call in the dataset was scored manually by me, not auto-generated. The sample covers 32 companies across Benelux and DACH, from Series A through scale-up stage.
Is the full report really free?
Yes. Enter your work email and you get the full PDF in your inbox. No sales call, no obligations. If you want to go deeper after reading, the Rep Gap Analysis is where to start.
Will I be added to a marketing list?
You'll receive the report and occasionally an insight from the benchmark or an article I've written. Unsubscribe with one click at any time. No spam, no cold sales emails.

Let's find what's holding your team back

Send me one call from your top rep and one from a mediocre performer. I'll show you the exact gap and how to close it.