You've been asking the wrong question about why
Yaniv Corem · Mar 24 · 3 min read
THE OPS BRIEF — Issue #5
A weekly dose of operational intelligence for program managers who prefer clarity over clutter.
THIS WEEK'S THOUGHT
Most innovation programs spend enormous energy trying to inspire people. Workshops. Vision sessions. Purpose decks. And the resistance keeps showing up anyway. Because inspiration is rarely the bottleneck. Being heard is.
🎙️ THIS WEEK ON THE SCHOOL OF INNOVATION
"The Innovation Coach Knows That Most People Are Asking the Wrong Question About Why" — with Suzanne Vos
Suzanne Vos is an innovation coach embedded inside ING, one of Europe's largest financial institutions. Her job is to help people launch and scale new ideas inside an organization that — like most large ones — is structurally built to say no.
What makes her perspective unusual is that she doesn't try to inspire people into taking risks. She listens first. She asks what's actually blocking them. And she's built a model for moving ideas through organizations that treats resistance as information rather than opposition. If your program spends more time motivating than listening, this episode will reframe how you're thinking about it.
🔗 Listen to the episode ⏱️ 6 min read / full episode
🛠️ THIS WEEK'S TOOL
Theory of Change Builder — Free Download
If Suzanne's episode surfaces one uncomfortable question, it's this: does your program have a coherent theory of how change actually happens — or does it just have a stated purpose and a lot of hope?
The Theory of Change Builder gives you a structured framework for mapping the outcomes your program is designed to achieve, the assumptions underlying your model, and the evidence you'll need to test them. It's the difference between a program built on conviction and one built on clarity. And when the board asks why the program matters, you'll have an answer that isn't a vision statement.
📍 FROM THE FIELD
I've been working with a government-sponsored entrepreneurship hub this month.
Their mandate: run 14 consecutive programs. Twelve pre-accelerators and two accelerators, back to back.
To make that operationally feasible, they had to make a lot of assumptions upfront. About what founders at this stage actually need. About how much time mentors would genuinely commit. About what "support" means when you're running at that scale and that pace.
Some of those assumptions were right.
A lot of them weren't.
Here's the thing — none of the assumptions were made carelessly.
They were made because discovery takes time, and time was the one thing they didn't have.
There's a real tradeoff here that most program designers don't talk about honestly: the more assumptions you bake in upfront, the easier the program is to deliver. Fewer variables. Cleaner ops. A replicable model.
But every assumption you make without testing it is a bet.
And when fourteen programs are riding on those bets, the cost of being wrong compounds fast.
Suzanne Vos asks a question inside ING that I think every program manager should steal:
"Do we have evidence for this?"
Not "does this feel right?" Not "did this work somewhere else?" Not "is this what we've always done?"
Do we have evidence — for this program, with these founders, in this context?
Most programs never ask it. Not because they're incurious. But because challenging a founding assumption can feel like undermining the vision. Like disloyalty to the people who worked hard to design it.
So the assumptions sit there. Untested. Quietly shaping everything.
Tony Robbins has a line I keep coming back to: "Success leaves clues."
He's right. And failure leaves clues, too.
Evidence-based design isn't about slowing down or running endless discovery workshops. It's about finding the clues before you've built fourteen programs on top of a crack in the foundation.
The programs that scale without falling apart are the ones that treated their first cohort as a test, not a proof.
What assumptions is your program still running on?
ONE THING YOU CAN DO THIS WEEK
Before your next program meeting, write down the three biggest assumptions your current program design is built on. Then ask: "Do we have evidence for each of these? Or are we operating on gut feeling?" You don't have to test all three this week. Just naming them is a start.
I'll see you next week,
Yaniv