Sales training is one of the most over-purchased and under-delivered interventions in B2B SaaS. Every year, companies fly in a trainer, run a two-day workshop, hand out a workbook, and then watch conversion rates drift back to exactly where they were three weeks later. I have seen this pattern repeat across dozens of teams.
The problem is not the training content. It is the conditions around the training. A sales rep who does not see the relevance of what you are teaching will not change their behaviour. A manager who cannot reinforce new habits will watch them decay in real time. And a training program that does not start from actual evidence — real calls, real pipeline data, real conversion gaps — is essentially guessing at what to fix.
This article is for sales leaders, CROs, and enablement managers at B2B SaaS companies who want to know what a sales training program looks like when it actually works. The four phases below are what I apply in every sales excellence engagement.
1 Create the Conditions for Learning Before the Training Starts
The single biggest reason sales training fails is that reps were never bought in to begin with.
Think about what training typically feels like from the rep's perspective. A decision was made above them, someone external arrives with a slide deck, and they spend two days being told that the way they have been doing their job is wrong. No wonder they sit with their arms crossed and their laptop open. Even the reps who engage in the room often revert within a week, because the training never connected to anything they personally cared about.
Behaviour change requires motivation. Motivation requires relevance. And relevance requires that you actually ask the rep what they care about before you start teaching them.
The work that happens before training begins is more important than the training itself. It involves three specific inputs.
Baseline KPI and Call Review
Before any rep sits in a training session, you need to know their actual performance numbers — not what their manager thinks, and not what the CRM dashboard shows at face value. I mean a genuine funnel analysis: SAL-to-SQL conversion rate (sales-accepted lead to sales-qualified lead), demo-to-close rate, average deal size by segment, and sales cycle length. Alongside that, you need to listen to a sample of their actual calls — at least five or six per rep, across different deal stages.
This is not optional. Training that is not grounded in evidence is guesswork. When I audit a team, I typically find that the most commonly cited problem ("our reps can't close") is a symptom, not a cause. In most cases the real issue is discovery — reps are entering demos without a clear understanding of the business problem, which means their demos are generic, which means the prospect has no urgency, which means the close fails. The call review is what reveals this chain.
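To make the baseline concrete, here is a minimal sketch of that funnel analysis in Python. It assumes a CRM export with one row per deal and hypothetical column names (`rep`, `stage_reached`); the stage list and column names are illustrative, so adapt them to your own pipeline definitions.

```python
import pandas as pd

# Pipeline stages in order; adapt to your own funnel definitions.
STAGES = ["SAL", "SQL", "Demo", "Closed Won"]

def funnel_baseline(deals: pd.DataFrame) -> pd.DataFrame:
    """Per-rep conversion rate between each adjacent pair of stages.

    Expects one row per deal with columns 'rep' and 'stage_reached',
    where 'stage_reached' is the furthest stage the deal got to.
    """
    order = {stage: i for i, stage in enumerate(STAGES)}
    deals = deals.assign(stage_idx=deals["stage_reached"].map(order))
    rows = []
    for rep, group in deals.groupby("rep"):
        for i in range(len(STAGES) - 1):
            entered = int((group["stage_idx"] >= i).sum())
            advanced = int((group["stage_idx"] >= i + 1).sum())
            rows.append({
                "rep": rep,
                "conversion": f"{STAGES[i]} -> {STAGES[i + 1]}",
                "rate": round(advanced / entered, 2) if entered else None,
            })
    return pd.DataFrame(rows)

# Example: funnel_baseline(pd.read_csv("crm_export.csv"))
```

Run against a quarter of closed and open deals, this produces the per-rep, per-stage baseline that the rest of the programme is measured against.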
Personal Interview and Motivation Mapping
Once you have the data, sit with each rep individually — not as a performance review, but as a genuine conversation. The questions that matter: What does career progression look like for them over the next 12 months? Where do they feel least confident in their sales process? What do they think is holding their numbers back? What kind of feedback have they found most useful in the past?
Two things happen in this conversation. First, you get information that no dashboard will ever show you. Second, and more importantly, the rep feels heard. That conversation is the first act of buy-in. When training later references something they told you — "you mentioned you struggle to get second meetings, so today we are going to focus on exactly that" — the relevance is no longer abstract.
Customer Feedback as a Mirror
The most underused source of training content is the voice of the customer. A structured set of post-sale and post-loss interviews — asking buyers what they valued, what made them hesitant, and what the rep could have done differently — gives you external validation of what your internal data shows. When a rep hears that three of their lost prospects mentioned "the demo felt generic," that lands differently than a manager saying the same thing. It is harder to dismiss feedback when it comes directly from the people they were trying to sell to.
2 Train on the Real Skill Gap, Not the Assumed One
Once the diagnostic is complete, you know what to train. The temptation at this point — especially when working with a team of eight or ten reps — is to build one training program that covers everything. Resist it.
The average sales training program tries to cover discovery, demo, objection handling, negotiation, prospecting, and pipeline management in two days. It covers everything shallowly and fixes nothing specifically. I describe this as the training equivalent of the six-pack problem: you are optimising all your gym members for the same goal when they came in with entirely different needs.
A training program that targets the actual skill gap, grounded in the team's own evidence, will do more in a focused half-day than a generic two-day workshop.
Ground Every Session in Real Evidence
Every training session should begin with evidence from the team's own calls and pipeline data — not hypothetical scenarios, not case studies from other companies. When you play back a clip of a rep asking a leading question in discovery and then show how the deal stalled at demo stage, the learning is immediate and undeniable. The rep cannot argue with their own voice.
This requires preparation. In my engagements, I will typically tag 20–30 call moments — examples of strong and weak discovery questions, moments where urgency was built or lost, demos that connected to business outcomes versus those that ran through features — and use these as the teaching material. The clips are anonymised when shared with the group, but each rep knows their own calls are in the mix.
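If you want to keep that tagging organised, a plain record per clip is enough. The schema below is a sketch, not a tool recommendation, and every field name is an assumption; the point is that each clip carries the skill it demonstrates, whether it is a strong or weak example, and a way to strip the rep's identity before group sessions.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class CallMoment:
    """One tagged teaching clip from a recorded call (illustrative schema)."""
    call_id: str        # reference to the recording, not the audio itself
    rep: str            # kept internally; stripped before group sessions
    start_seconds: int  # where the clip begins in the recording
    end_seconds: int
    skill: str          # e.g. "discovery question", "urgency", "demo framing"
    quality: str        # "strong" or "weak": both are teaching material
    note: str = ""      # what specifically to point out in the session

def anonymise(moment: CallMoment) -> CallMoment:
    """Strip the rep's identity before the clip is shared with the group."""
    return replace(moment, rep="anonymous")
```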
Concrete Exercises and Immediate Application
Every concept introduced in training must have a corresponding exercise that the rep applies within the same session. Not a role play that happens in two weeks. Now, in the room. If you are teaching how to anchor a business problem in a discovery call, reps practice that call opening with a partner using a real prospect from their pipeline before the session ends.
The research on skill retention is consistent on this point. David A. Kolb's experiential learning model — published in Experiential Learning: Experience as the Source of Learning and Development (1984) — holds that adults learn most effectively when they cycle through concrete experience, reflective observation, abstract conceptualisation, and active experimentation within the same learning event, rather than spacing them across sessions with no application in between.
In practice, this means every training module ends with one concrete takeaway the rep commits to applying on their next call, and a mechanism to verify it happened — whether that is a call recording, a CRM note, or a brief check-in with their manager.
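One lightweight way to make that verification mechanism real is a commitment record per rep per module. The fields below are hypothetical, and the same records can feed the manager dashboard sketched later in this article.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Commitment:
    """One end-of-module takeaway a rep committed to (hypothetical fields)."""
    rep: str
    month: str               # e.g. "2025-03", for monthly roll-ups
    takeaway: str            # e.g. "open discovery with a business impact question"
    verify_by: date          # when the follow-up check happens
    evidence: str            # "call recording", "CRM note", or "manager check-in"
    followed_through: bool = False  # flipped once the evidence is confirmed
```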
3 Focused Coaching on a Maximum of Two KPIs at Once
Training creates awareness. Coaching creates habit. These are not the same thing, and confusing them is one of the most common mistakes sales managers make.
Jason Jordan and Michelle Vazzana, in Cracking the Sales Management Code, make a point that has stayed with me: 83% of the KPIs that sales managers track are unmanageable — they are outcome metrics that neither the manager nor the rep can directly influence through their daily behaviour. Revenue is the classic example. You cannot coach someone to more revenue. You can coach them to better discovery questions, which improve demo quality, which increase close rates, which eventually produce more revenue. The chain matters.
Coaching works when it is specific, observable, and connected to one or two behaviours at a time. Anything broader than that and you are not coaching — you are reviewing.
The Maximum-Two-KPI Rule
After training, each rep should have at most two coaching focus areas at any given time. Not five. Not the full scorecard. Two. In my experience, the sweet spot is one conversion rate target — for example, improving SAL-to-SQL from 55% to 70% — and one behavioural target that drives it, such as asking at least two business impact questions in every discovery call before moving to solution.
This constraint is harder than it sounds. Sales managers naturally want to address everything they see. But coaching on five things simultaneously is the same as coaching on nothing. The rep cannot hold five parallel improvement tracks in their head while also running a live call.
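A sketch of how the two-focus constraint can be enforced in whatever tracking you use. The conversion target uses the numbers from the example above (55% to 70% SAL-to-SQL); the behavioural baseline is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class CoachingFocus:
    """One of a rep's (at most two) active coaching focus areas."""
    name: str        # a conversion target or the behaviour that drives it
    baseline: float  # measured before training
    target: float    # agreed with the rep
    current: float   # updated weekly from CRM data or call scoring

MAX_FOCUS_AREAS = 2  # the constraint: never more than two at a time

def add_focus(plan: list[CoachingFocus], focus: CoachingFocus) -> None:
    """Refuse a third focus area; an existing one must be closed out first."""
    if len(plan) >= MAX_FOCUS_AREAS:
        raise ValueError("Maximum two coaching focus areas; close one first.")
    plan.append(focus)

# Behavioural baseline (share of discovery calls with 2+ business impact
# questions) is an invented figure for illustration.
plan: list[CoachingFocus] = []
add_focus(plan, CoachingFocus("SAL-to-SQL conversion", 0.55, 0.70, 0.58))
add_focus(plan, CoachingFocus("2+ business impact questions per discovery call",
                              0.30, 0.90, 0.45))
```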
Variety in Coaching Methods
Good coaching is not limited to one-on-ones. The COMPASS framework I use for call coaching — Continuous, Organised, Measured, Practical, Appealing, Safe, and Specific — identifies seven conditions that coaching must meet to produce behaviour change. "Appealing" is the one most managers underestimate.
Variety is what keeps coaching from feeling like surveillance. The methods that work best in combination are:
- Post-call debrief: the rep identifies what they would do differently before the manager offers any view. Self-diagnosis builds ownership.
- AI role-play simulation: reps practice specific scenarios — a difficult objection, a price conversation, a multi-stakeholder discovery call — without the stakes of a live deal.
- Call shadowing: the manager joins a live call silently and provides written feedback within two hours. The proximity to the actual event matters — feedback given three days later loses most of its impact.
- Group listening sessions: once or twice a month, the team listens to two or three anonymised call moments together and discusses what worked and what did not. These build shared language and normalise feedback.
The coaching schedule should be consistent — weekly at minimum for reps in active development — and voluntary sessions should be tracked for attendance. If your reps are not showing up to voluntary coaching, that is a leading indicator that the sessions are not valuable enough, not that the reps are uncommitted.
4 Uplevel the Manager to Own the Programme
This is the phase that most external training engagements skip entirely — and it is the reason most improvements evaporate within a month of the trainer leaving.
The manager is the multiplier. A well-trained team with a manager who cannot reinforce new behaviours will regress to baseline. A team with a manager who understands the skill gaps, has language for the coaching conversations, and tracks the right KPIs will keep improving long after any formal training programme ends.
Teach the Manager to Coach, Not Just Observe
Most sales managers were promoted because they were good individual contributors. They know how to sell. They do not necessarily know how to coach others to sell. These are different skills. A great sales manager, like a great football coach, does not simply watch their team play and shout instructions from the sideline. They run specific drills. They build individual development plans. They debrief specific moments, not general impressions.
The manager must be able to run the post-call debrief themselves, score calls against the same criteria used during training, and conduct a focused one-on-one that connects a rep's behaviour in a specific call to the KPI they are trying to move.
Give Managers the Right KPI Dashboard
Managers need to track conversion rates by rep and by stage, not just aggregate team revenue. A rep whose SAL-to-SQL rate is improving but whose demo-to-close is flat is telling you something very specific about where to focus next. A rep whose activity volume is high but whose conversion at every stage is stagnant is telling you something different. Revenue tells you none of this.
The dashboard should also include coaching activity itself: number of call reviews completed per rep per month, frequency of one-on-ones, and whether agreed commitments from previous sessions were followed through. Coaching that is not tracked tends not to happen.
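As a sketch, the coaching-activity side of that dashboard can be built from three simple logs. The column names are assumptions, and the Commitment records from earlier in this article would supply the follow-through figures.

```python
import pandas as pd

def coaching_activity(call_reviews: pd.DataFrame, one_on_ones: pd.DataFrame,
                      commitments: pd.DataFrame) -> pd.DataFrame:
    """Per-rep, per-month coaching activity (assumed column names).

    Each input needs 'rep' and 'month' columns; 'commitments' also needs
    a boolean 'followed_through' column.
    """
    reviews = call_reviews.groupby(["rep", "month"]).size().rename("call_reviews")
    meetings = one_on_ones.groupby(["rep", "month"]).size().rename("one_on_ones")
    follow = (commitments.groupby(["rep", "month"])["followed_through"]
              .mean().rename("commitment_follow_through"))
    # Outer-join on (rep, month): a zero row means no coaching happened,
    # which is exactly the signal the dashboard exists to surface.
    return pd.concat([reviews, meetings, follow], axis=1).fillna(0)
```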
Build the Handover Explicitly
At the end of any training engagement, there should be a formal handover: the manager receives the individual development summaries for each rep, the KPI benchmarks from before and after training, the call library of tagged examples used in sessions, and a six-week coaching plan with specific checkpoints. This is not a formality. It is the mechanism by which the training becomes embedded in the team's normal operating rhythm rather than a memory of a workshop they attended.
Comparing Training Approaches: What Actually Works
| Approach | Typical outcome | Why it fails or works |
|---|---|---|
| Generic 2-day workshop, no diagnostic | Temporary enthusiasm, rapid regression | No buy-in, not grounded in evidence, no follow-through |
| Online modules, self-paced | Low completion, no behaviour change | No accountability, no application, no feedback loop |
| Manager-led coaching without training | Inconsistent, depends on manager skill | Manager lacks tools, language, and structure |
| Diagnostic → tailored training → focused coaching → manager handover | Measurable KPI improvement within 6–8 weeks | Addresses root cause, builds habit, sustainable |
Frequently Asked Questions
How long does an effective sales training program take?
The sessions themselves are short: a focused half-day grounded in real evidence can outperform a generic two-day workshop. But the full cycle of diagnostic, tailored training, focused coaching, and manager handover runs six to eight weeks, which is when measurable KPI movement typically shows.
How do you measure whether sales training actually worked?
Compare stage-by-stage conversion rates (SAL-to-SQL, demo-to-close) against the baseline captured before training, alongside the behavioural targets agreed in coaching. Aggregate revenue is an outcome metric and tells you very little on its own.
How many skills should you focus on in a sales training program?
Train on the one or two gaps the diagnostic actually reveals, then coach each rep on at most two focus areas at a time: one conversion rate target and one behavioural target that drives it.
What is the difference between sales training and sales coaching?
Training creates awareness; coaching creates habit. Training introduces a skill with evidence and immediate practice; coaching turns that skill into default behaviour through specific, repeated feedback on live calls.
What makes sales managers better coaches after a training programme?
Three things: the ability to run post-call debriefs and score calls against the same criteria used in training, a KPI dashboard built on manageable behaviours rather than lagging outcomes, and an explicit handover that includes development summaries, KPI benchmarks, the call library, and a six-week coaching plan.
The Outcome You Are Actually Building Towards
The goal of an effective sales training program is not a better workshop. It is a measurable shift in conversion rates that is sustainable because the manager can maintain it. In my experience working with B2B SaaS teams across Benelux and DACH, the difference between a program that moves the needle and one that does not comes down to those four phases. Not the quality of the slides. Not the charisma of the trainer. Whether the rep was bought in, whether the content was grounded in their actual calls, whether coaching was specific and focused, and whether the manager was equipped to continue the work.
Depending on where you are and what your team needs right now, there are two ways I can help — both built on the same diagnostic-first, evidence-grounded approach described in this article.
Two ways to work together
Both options start with your team's actual data — calls, CRM, and KPIs — not a generic programme.
Focused training programs
- Every session follows the SETUP method: real evidence first, theory second, live application third
- Built from your call recordings and KPI data — not off-the-shelf slides
- Start with one program and add more based on what moves
- Faster to deploy — good fit when you have a clear, specific gap

A full sales excellence engagement
- Covers all four phases: diagnostic, tailored training, focused coaching, manager handover
- Max 2 KPIs coached at a time, tracked across a 6–8 week cycle
- Includes the manager uplevel — so results stick after the engagement ends
- Right fit when you want lasting behaviour change, not a one-off workshop