How to Stop “Drive-By Training Requests” Without Becoming the Department of “No”
“Can you quickly make a training for this?” is one of the most expensive sentences in enterprise L&D.
Drive-by requests usually show up when someone is under pressure and needs a fast fix. The issue is that training is often not the real fix—and even when it is, the fastest version tends to be the least effective. You end up shipping something rushed, unclear, or mis-scoped… then paying for it later in rework, confusion, and a second wave of requests.
The goal isn’t to shut people down. It’s to catch drive-bys early, redirect them into clarity, and offer a solution that’s actually proportional to the problem—without creating friction or damaging relationships.
Why drive-bys happen
Drive-by requests are a symptom—not a personality issue. They typically come from one (or more) of these root causes:
1) Nobody owns enablement end-to-end
In many organizations, the business owns performance goals, managers own outcomes, and L&D owns “training.” When something goes wrong, the quickest handoff is: “Let’s make training.” It feels like progress, even if it’s not the right lever.
2) The process isn’t defined (or isn’t trusted)
If a workflow lives in someone’s head or a messy deck from two years ago, people reach for training as a way to force standardization.
3) Performance problems get mislabeled as training problems
A tool is confusing. The workflow changed. Incentives are misaligned. Managers aren’t reinforcing. But instead of addressing the system, the organization asks for a course—because it’s the easiest request to make.
4) “Training” is used as a protective action
In compliance, safety, and regulated environments, training sometimes functions as evidence: “We trained employees.” That’s real. But it requires an approach that prioritizes auditability and comprehension—not rushed content.
5) Stakeholders don’t know what’s involved
Many leaders think training is like sending an email: quick, easy, instant. Unless you teach the organization what “good” looks like—and how long it realistically takes—drive-bys keep coming.
The two-step response that keeps relationships strong
If you respond the wrong way to a drive-by, you create either:
- resentment (“L&D is blocking us”), or
- a chaotic yes (“we’ll try”), followed by rework and frustration.
A better approach is a consistent two-step response.
Step 1: Acknowledge + redirect to clarity
Your first job is to slow the request down by 60 seconds—not weeks. You do that by asking for clarity in a way that feels helpful.
Here are a few copy/paste options depending on tone.
Option A (friendly, direct)
Happy to help. To make sure we solve the right problem, can you share what changed and what people are doing wrong today?
Option B (for urgent stakeholders)
Yes—we can support. Quick question so we don’t build the wrong thing: what changed, what errors are happening, and what’s the impact if it doesn’t change?
Option C (executive-level, outcome-driven)
I’m aligned. Before we decide on training, what behavior needs to change and how will we measure that it changed?
This does something subtle but powerful: it shifts the conversation from “make content” to “solve the problem.”
Step 2: Offer three solution paths (not just “training”)
Most drive-by requests are really requests for enablement. Give stakeholders a professional menu of solutions—each with a clear purpose and time expectation.
Path 1: Performance support (fastest)
What it is: checklist, job aid, quick reference guide, annotated screenshots, “do this / don’t do this” SOP card
Best for: stable workflows, reminders, common misses, “in-the-moment” tasks
Typical timeline: 24–72 hours (depending on complexity)
Path 2: Microlearning (targeted behavior change)
What it is: 5–8 minutes, one decision point, one scenario, one knowledge check
Best for: frequent mistakes, judgment calls, “people know the rule but don’t apply it”
Typical timeline: 1–2 weeks
Path 3: Full module (only when needed)
What it is: multi-step instruction + practice + assessment + audit trail
Best for: compliance/safety, high-risk procedures, broad audiences, global consistency needs
Typical timeline: 3–6+ weeks (depending on reviews and constraints)
This reframes you as a partner who knows how to solve problems—not a gatekeeper who says no.
The “Is this training?” filter (use it every time)
Before anything enters build, run the request through a short filter. This is the difference between shipping fast and shipping twice.
Ask:
1) What’s the consequence if nothing changes?
If the consequence is unclear, the request isn’t ready.
2) What exactly is the error or failure point?
“People aren’t doing it” is not a failure point. “They skip step 4 when under time pressure” is.
3) Is the problem knowledge, skill, motivation, or process/tools?
- Knowledge: they don’t know the rule
- Skill: they can’t do it reliably
- Motivation: they don’t believe it matters / incentives conflict
- Process/tools: the workflow or UI causes failure
4) What do top performers do differently?
If top performers succeed with a checklist, start there.
5) Can someone do the task with a checklist in the moment?
If yes, performance support is the correct first solution.
6) How will we know it worked?
Pick a metric (even a proxy): fewer errors, fewer tickets, fewer reworks, higher first-pass QA, faster time-to-competency.
If answers are vague, don’t “build anyway.” Move it to discovery.
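For teams that run intake through a lightweight form or tracker, the filter above can be sketched as a simple triage routine. This is an illustrative sketch only — the field names (`consequence`, `failure_point`, `root_cause`, and so on) are assumptions for the example, not a standard intake schema:

```python
# Illustrative triage sketch for the "Is this training?" filter.
# All field names here are assumptions for this example, not a standard schema.

def triage(request: dict) -> str:
    """Return the recommended lane for an intake request."""
    # Questions 1-2: if the consequence or failure point is vague,
    # the request isn't ready to build -- send it to discovery.
    if not request.get("consequence") or not request.get("failure_point"):
        return "discovery"

    # Question 5: if a checklist works in the moment,
    # performance support is the correct first solution.
    if request.get("checklist_sufficient"):
        return "performance_support"

    # Question 3: route on the root cause of the problem.
    cause = request.get("root_cause")
    if cause in ("process", "tools", "motivation"):
        return "discovery"  # not a training problem; needs a system fix
    if cause == "knowledge":
        return "microlearning"
    if cause == "skill":
        # High-risk skill gaps justify the full module
        # (practice + assessment + audit trail); otherwise stay light.
        return "module" if request.get("high_risk") else "microlearning"

    return "discovery"


# A vague "quick training" request lands in discovery, not in build.
print(triage({"consequence": "audit finding"}))  # prints "discovery"
```

The point of encoding it, even informally, is that the lightest effective lane becomes the default and "build a module" becomes the exception that must be argued for.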

The best way to say “not training” without friction
Avoid blunt refusals. Instead, keep momentum with a phased approach that sounds practical—not defensive.
Use language like:
This looks more like a workflow issue than a training gap. The fastest fix is a one-page checklist + manager coaching prompts. If we still see errors after two weeks, we can build a short micro-module to reinforce the decision points.
This works because it:
- acknowledges urgency
- offers a solution immediately
- preserves training as an option without committing prematurely
Template 1: When someone asks for “quick training”
Subject: Re: Quick training request
Hi [Name],
Happy to help. To make sure we solve the right issue (and don’t build the wrong thing), can you confirm:
- What changed?
- What are people doing wrong today?
- What’s the consequence if it doesn’t change?
- Is there a deadline driver (launch/audit/incidents), or is this a preference?
If this is urgent, the fastest path is usually a checklist + quick reference guide first. We can ship that quickly and escalate to microlearning if the errors continue.
Thanks,
[Name]
Template 2: When you suspect it’s not training
Subject: Re: Enablement support
Hi [Name],
Thanks for flagging this. Based on what I’m hearing, this may be less about training and more about workflow clarity. The quickest fix is:
- A one-page checklist / “do this, don’t do this” guide
- Manager coaching prompts for reinforcement
If we still see the same error pattern after [2 weeks], we can build a short scenario-based microlearning to lock in the decision points.
Thanks,
[Name]
What to standardize so drive-bys shrink over time
Drive-bys never fully disappear. But you can make them cheap to handle.
Standardize these three things:
1) One shared intake link
So requests don’t enter through 12 channels.
2) A decision tree (“job aid vs microlearning vs module”)
So stakeholders learn that “training” has forms—and the lightest effective option is usually best.
3) Standard turnaround expectations by lane
If people know “job aid = days, microlearning = 1–2 weeks, module = longer,” requests become more realistic.
Over time, you’ll notice a shift: stakeholders start asking for the right asset type up front.

Common failure modes (and fixes)
Failure: You still say yes to everything “just to be helpful.”
Fix: yes to the problem, not to the format. Always offer the three paths.
Failure: Stakeholders treat the solution menu like negotiation.
Fix: anchor the choice to risk and reach: “Because this is high risk, we need assessment and sign-off. Because it’s low risk, we can ship a job aid fast.”
Failure: You ship a job aid, but nothing changes.
Fix: add reinforcement: manager prompts + quick comms + a 2-week metric check.
Failure: People keep bypassing intake.
Fix: “If it’s not in intake, it’s not scheduled.” Be consistent.
What to measure (so you can prove progress)
Track these monthly:
- Number of drive-by requests received (estimate is fine)
- % converted to performance support vs training
- Average time to deliver performance support assets
- Number of escalations related to “status”
- Error metrics tied to the request (tickets, incidents, rework rates)
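If the intake log lives in a spreadsheet export, the monthly roll-up is a few lines of arithmetic. A minimal sketch, assuming hypothetical record fields (`lane`, `days_to_deliver`):

```python
# Minimal monthly roll-up sketch for the intake metrics above.
# The record fields ("lane", "days_to_deliver") are hypothetical examples.
from collections import Counter

requests = [
    {"lane": "performance_support", "days_to_deliver": 2},
    {"lane": "performance_support", "days_to_deliver": 3},
    {"lane": "microlearning", "days_to_deliver": 9},
]

# Count requests per lane, then compute the performance-support share
# and the average delivery time for that lane.
by_lane = Counter(r["lane"] for r in requests)
ps = [r for r in requests if r["lane"] == "performance_support"]
pct_support = 100 * len(ps) / len(requests)
avg_days = sum(r["days_to_deliver"] for r in ps) / len(ps)

print(f"{pct_support:.0f}% performance support; avg delivery {avg_days:.1f} days")
```

Even at this fidelity, the numbers support the story you want to tell leadership: more requests resolved in the fast lane, delivered in days rather than weeks.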
Even basic tracking helps you show the organization: “We’re solving issues faster and with less waste.”
A simple rollout plan (so this becomes real)
Week 1
- Publish your solution menu (job aid / microlearning / module)
- Create the 6-question “Is this training?” filter
- Draft your copy/paste response templates
Week 2
- Use the two-step response on every drive-by
- Convert at least 50% of requests to performance support where appropriate
- Share the first few “fast wins” internally (quietly, no fanfare)
Within a month, drive-bys feel less disruptive—because you’ve created a consistent, professional response pattern.


