Your Strategy Isn’t Broken. Your Timeline Is.

Apr 10, 2026

Michael Gabrielle Colayco

Reading Time: 10 min

Part 5 of the Dev Engagement Series: Execution

Think about the last podcast you loved that just… stopped.

Not because the show got bad. Not because the host ran out of ideas. Because somewhere around episode 8, someone checked the download numbers, decided it wasn’t working, and quietly moved on.

This happens constantly in developer programs. Teams put out a blog series, a YouTube channel, an online community session. They run it for a few weeks. They check the numbers. The numbers don’t feel like they’re going anywhere. So they cancel the program and start thinking about what to try next.

Then six months later, the cycle starts again.

At Stateshift, we’ve watched this pattern play out across dozens of developer programs. And in most of the cases where something “didn’t work,” the strategy was fine. The timeline wasn’t.

The principle is called Ship to Learn. And it changes how you think about execution entirely.

When bad execution gets mistaken for a bad strategy

Here’s the mistake most teams make.

They treat the first few results as a verdict.

You publish three blog posts. Traffic is low. Conclusion: the blog isn’t working. You run two community sessions. Attendance is weak. Conclusion: developers aren’t interested in this format. You put out a podcast. Five downloads per episode. Conclusion: wrong channel.

But here’s what’s actually happening in each of those cases: you haven’t executed enough times to know anything yet.

One set of results isn’t a test. It’s a starting point.

This is one of the most persistent and expensive mistakes in developer engagement. Teams confuse the quality of early execution with the validity of the strategy. And when the strategy gets cancelled before it gets a real chance, all the investment that went into choosing it, planning it, and starting it gets written off.

The question isn’t whether your first few attempts landed. The question is whether the right metric is trending in the right direction over time.

The Ship to Learn principle

At Stateshift, we use a framework we call Ship to Learn. It originated from how the best teams at GitHub approached development: you put something out there that’s consumable, gather the data, and use what you learned to make the next version better.

It’s not a philosophy about perfectionism versus speed. It’s a framework for execution that separates learning from judgment.

Here’s how it works.

  1. Ship something. A blog post, a video, a community session, a newsletter. It doesn’t need to be flawless. It needs to be out there and consumable.
  2. Wait. Give developers time to engage with it.
  3. Look at the data. Not to evaluate whether the strategy is working, but to understand how this specific piece performed and why.
  4. Pick one thing to improve. Not five things. Not a full rebrand. One thing. Apply it to the next attempt.
  5. Repeat.

That’s the whole framework. The power isn’t in any single step. It’s in the accumulation of small improvements over time, each one informed by real data rather than gut feel or panic.
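For readers who think in code, the five steps can be sketched as a loop. This is an illustrative sketch only; `measure` and `improve_one_thing` are hypothetical placeholders for whatever your real publishing and analytics steps are.

```python
# Sketch of the Ship to Learn loop. The helpers passed in are
# hypothetical placeholders, not a real Stateshift API.

def ship_to_learn(draft, cycles, measure, improve_one_thing):
    """Run `cycles` iterations: ship, wait, measure, improve ONE thing."""
    history = []
    for _ in range(cycles):
        result = measure(draft)                   # steps 2-3: wait, then look at the data
        history.append(result)
        draft = improve_one_thing(draft, result)  # step 4: one change, not five
    return draft, history                         # step 5: repeat; judge the trend

# Toy usage: pretend the metric equals content quality, and each
# cycle makes one small improvement.
final, history = ship_to_learn(
    draft=1,
    cycles=5,
    measure=lambda d: d,
    improve_one_thing=lambda d, r: d + 1,
)
print(history)  # [1, 2, 3, 4, 5]
```

The point the sketch makes is structural: the verdict comes from `history` as a whole, never from `history[0]`.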

Why good programs keep getting abandoned

If Ship to Learn sounds simple, that’s because it is. But simple doesn’t mean easy.

Most teams running developer programs are also shipping products, managing support queues, talking to customers, and trying to hit quarterly numbers. Developer content and community are often the first things to get deprioritized when capacity tightens.

And when those teams do find time to check in on the program, what they see is a handful of early results that don’t look impressive. At that point, the temptation isn’t to iterate. It’s to pivot.

There’s also a subtle psychological trap at play. You’ve probably heard the often-misattributed quote that the definition of insanity is doing the same thing over and over and expecting different results. Most people apply that logic to their developer program after three attempts and decide they need a completely different approach.

But it doesn’t apply here. You’re not doing the same thing over and over. You’re doing a slightly improved version each time, informed by what you learned from the last one. That’s not insanity. That’s how anything gets good.

The power of Ship to Learn

Marques Brownlee, known online as MKBHD, is one of the most respected technology reviewers on the internet, and one of the best examples of how powerful Ship to Learn is. His YouTube channel has nearly 21 million subscribers. But when he recorded his 100th video back in 2009, he had roughly 75 subscribers.

Marques Brownlee recording his 100th YouTube video during his early channel growth, when he had 74 subscribers.


One hundred videos. Seventy-five people. Most of us would have stopped long before that. Most of us do stop long before that. But Brownlee kept going because he was focused on improvement, not instant results. Each video got slightly better than the last. The production quality improved. The structure tightened. The audience grew. Not because of any single breakthrough, but because of consistent, compounding improvement over time.

Another example is a client we worked with at Stateshift, Journey Apps, who had been struggling with their video content. The videos weren’t landing the way they hoped, and the team was starting to question whether video was even the right channel for their audience.

Instead of switching channels, we went back to the basics. We reviewed the opening seconds of their videos, looked at thumbnails, titles, and click-through data. We identified one specific area to focus on.

They made those changes, published the next video, and it outperformed everything they’d put out before.

Stateshift client Journey Apps’ best-performing YouTube video and its performance insights.


That result didn’t happen because they found a magic formula. It happened because they shifted from asking “does video work?” to asking “how do we make our specific video content work better?” The difference matters more than it sounds.

The judgment calls that determine whether you stay or go

Applying Ship to Learn well comes down to a handful of judgment calls, not a checklist of tactics.

Choose the right metric before you start. This is the most important step and the one most teams skip. You need to be measuring the right metric, and that metric needs to be trending upward over time. Slowly is fine. Flatline is the signal to investigate. But a gentle upward curve means you’re in the right territory. Keep going.

For a blog, measure unique visitors. For a podcast, measure downloads. For a YouTube channel, measure watch time. For community sessions, measure returning attendees. Pick the metric that reflects genuine engagement, not vanity signals like raw follower counts.

If you don’t agree on the metric before you start, you’ll end up measuring the wrong thing and drawing the wrong conclusions.
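One lightweight way to make the “trending upward vs. flatlined” call concrete is to fit a simple trend line to your weekly numbers. This is a hedged sketch, not a Stateshift formula; the data and the 1% threshold are illustrative assumptions you should tune to your own program.

```python
# Sketch: decide "keep going" vs "investigate" from a weekly metric series.
# The threshold and sample data are illustrative assumptions.

def trend_verdict(weekly_values, flat_threshold=0.01):
    """Fit a least-squares line and compare its slope to the series mean.

    Returns "keep going" if the metric grows by more than flat_threshold
    (default 1%) of its mean per week, else "investigate".
    """
    n = len(weekly_values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weekly_values) / n
    # Classic least-squares slope: cov(x, y) / var(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    relative_growth = slope / mean_y if mean_y else 0.0
    return "keep going" if relative_growth > flat_threshold else "investigate"

# A slow but steady climb in unique blog visitors: still a green light.
print(trend_verdict([120, 125, 123, 131, 138, 140, 149, 151]))  # keep going
# Months of genuinely flat podcast downloads: time to dig into why.
print(trend_verdict([50, 52, 49, 51, 50, 48, 51, 50]))          # investigate
```

The slow-growth series still passes because the test is direction, not speed, which is exactly the judgment this section argues for.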

Commit to a cadence you can actually sustain. The most common failure mode isn’t bad strategy. It’s overcommitting to a publishing schedule that can’t be maintained, burning out after six weeks, and deciding the channel doesn’t work. If you can only publish once a month, publish once a month consistently. Consistency outperforms frequency every time.

Pick one thing to improve per cycle. Not three things. Not a complete overhaul. One targeted change per iteration is the only way to know what’s actually moving the needle. Improve the title on your next post. Improve the intro on your next video. Improve the question format in your next session.

Give each change enough runway to generate real signal. Developer content especially needs time to find its audience. A blog post published today might get picked up by search three months from now. A community session might attract a completely different audience by month five. Don’t evaluate a change after one data point.

Know when to actually stop. Ship to Learn isn’t about running a failing program indefinitely. If your core metric has genuinely flatlined for several months despite consistent execution and real iteration, that’s worth investigating. Is it the channel? The topic? The format? The audience? A real plateau deserves analysis. A slow start doesn’t.

What you leave behind when you start over

There’s a reason teams that operate this way tend to build stronger programs over time.

It’s not just about finding what works. It’s about accumulating knowledge that’s specific to your audience.

Every iteration teaches you something. Which topics resonate. What format developers prefer. What questions come up repeatedly. What content earns shares versus what gets scrolled past.

That knowledge doesn’t transfer to a new channel or a new program. It lives in the decisions, tests, and refinements you’ve made in this one. When you abandon a program too early and start fresh, you absorb the sunk cost but leave all the learning behind.

At Stateshift, we’ve worked across hundreds of developer programs at companies ranging from early-stage startups to established platforms, and this is one of the most common patterns we help them move away from so they can build truly impactful developer initiatives.

Your developer program probably isn’t broken

If you’re reading this because something in your developer program isn’t getting the traction you expected… the strategy is probably fine.

The real question is whether you’ve iterated enough times, on the right metric, to give it a fair test.

If the core metric is trending upward… keep going.

If it’s genuinely flatlined… dig into why before you decide anything.

If you’ve been running it for fewer than a dozen cycles… you’re still in the learning phase, not the evaluation phase.

Ship to Learn isn’t about patience for the sake of it. It’s about making sure that every time you execute, you come away slightly better at execution. That’s what compounds.

If you want help applying this to a specific program, book a call with Jono. We can usually identify the one thing worth changing in a first conversation.

Your graph doesn’t need to go up fast. It just needs to keep going up.

Common Questions

When do you keep going vs pivot in a developer program?

Follow your core metric. If it’s trending upward, even slowly, keep going. If it’s flat for months despite consistent execution and real iteration, investigate. At Stateshift, we often see teams pivot too early, before the data is meaningful. Pivot only when data shows something foundational isn’t working.

What does “Ship to Learn” mean?

It means treating early outputs as experiments. You publish, measure, improve one variable, and repeat. The goal isn’t immediate results; it’s learning what works. Over time, small improvements compound. Early content isn’t for winning; it’s for understanding.

How many iterations before traction?

There’s no fixed number of iterations. What matters is the trend, not the count. If your key metric is improving, you’re on the right path. If it stalls despite iteration, investigate. Don’t rely on arbitrary milestones; let direction guide your decisions.

Why do developer programs stall early?

Premature abandonment. Teams see weak early results and stop too soon. They absorb the cost of execution but miss the payoff from accumulated learning. Early stages are meant to be inefficient; that’s where insight is built. The teams that succeed stay long enough for that learning to turn into momentum.

Written by:
Michael Gabrielle Colayco

Michael creates content for the Stateshift blog, social media, YouTube channel, and more. He is passionate about building incredible content.

Get the SHIFTsignal

SHIFTsignal is not a boring newsletter filled with links. It is a FREE weekly dose of high-quality insights, techniques, and recommendations for building your movement, sent directly to your inbox.