Learn-Driven Development
The Learning Loop

Phase 2: Slice

Slice what you validate, not necessarily what you build.

If building is so fast and cheap now, why not build everything at once?

You can. Sometimes you should. If an agent can build the whole feature in an afternoon, there's no reason to artificially constrain it. But building everything is not the same as exposing everything.

The bottleneck was never building. It's learning. And learning requires clear signal. When you release ten assumptions to users at once and something doesn't work, you can't tell which assumption was wrong. You've shipped a lot, but you've learned nothing.

Slicing in LDD is about controlling what you expose for validation, not necessarily what you build. Build big if the cost is low. But reveal small, so you can read the signal clearly.

Build big, show little

Feature flags, canary releases, progressive rollouts: these are all mechanisms for rationing exposure, not rationing code. The whole feature might exist behind a flag, but you only turn on one slice for real users. You validate one belief. Then the next. Then the next.
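The flag-based rationing described above can be sketched in a few lines. This is a minimal illustration, not a real feature-flag library: the flag name (`export_v2`), the `rollout_pct` field, and the hashing scheme are all assumptions chosen for the example.

```python
import hashlib

# Hypothetical flag store: the whole feature is built and deployed,
# but only a small percentage of users are exposed to it.
FLAGS = {
    "export_v2": {"enabled": True, "rollout_pct": 5},  # expose to 5% of users
}

def bucket(user_id: str) -> int:
    """Deterministically map a user to a 0-99 bucket, so the same
    user always lands on the same side of the rollout."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_exposed(flag: str, user_id: str) -> bool:
    """True if this user falls inside the current rollout slice."""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    return bucket(user_id) < cfg["rollout_pct"]

# The code path for the full feature exists either way;
# exposure, not code, is what gets rationed.
if is_exposed("export_v2", "user-123"):
    ...  # render the new slice
```

Widening the rollout is then a config change, not a deploy: bump `rollout_pct` once the first slice has answered its question.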

And don't worry about "wasting" build effort. Building, rebuilding, and refactoring now takes minutes to a few hours, not days or weeks. If the validation tells you the whole thing was wrong, you throw it away and rebuild in a different direction. The code is cheap. The learning is what you keep.

For solo builders, even if you vibe-coded the whole thing in one session, you can still deploy it progressively and watch what happens with the first slice before unlocking the rest. The loop doesn't require you to build small. It requires you to learn small.

What a good slice looks like

Thinking mode: Design thinking meets technical thinking. The work is figuring out the smallest thing you can expose that still answers the question. This is where product perspective and engineering perspective are most powerful together. On a team, this is a conversation, not a handoff. Solo builders: what's the thinnest version you can put in front of a real user to find out if you're right?

Inputs: The framed hypothesis, system architecture, existing code/context.

Output: A thin slice specification:

  • User journey: What specific behavior are we exposing and testing?
  • Acceptance criteria: What must be true for this to count as validated?
  • Non-goals: What are we explicitly not exposing yet (even if it's already built)?
  • Constraints: Feature flags, canary release, or other guardrails.
  • Context for agents: What should the agent know about the codebase, patterns, and constraints?
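A slice spec can be as lightweight as a structured record checked into the repo. The sketch below mirrors the fields listed above; every concrete value (the feature name, thresholds, paths) is illustrative, not prescribed.

```python
# Illustrative thin-slice spec; field names mirror the list above.
slice_spec = {
    "hypothesis": "Users will adopt one-click export if it is discoverable",
    "user_journey": "User opens a report and clicks Export, receives a PDF",
    "acceptance_criteria": [
        "A meaningful share of users who open a report try the export",
        "Export completes without errors for the canary cohort",
    ],
    "non_goals": [
        "CSV and XLSX export (already built, stays behind the flag)",
    ],
    "constraints": {
        "feature_flag": "export_v2",
        "rollout": "5% canary first, widen only after the signal is read",
    },
    "context_for_agents": [
        "Reporting code lives under the existing reports module",
        "Follow the established service and naming patterns",
    ],
}
```

Keeping the spec this small forces the one-slice-one-belief discipline: if it needs more than one hypothesis field, it is two slices.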

The technical details in the slice spec (architecture constraints, existing patterns, dependencies) are best written by whoever is closest to the codebase: an engineer, an agent that's been given the right context, or both. This isn't the PM's job. The PM frames the question; the engineer and the agent provide the technical context that makes the slice buildable. Each contributes what the other can't.

The slice should be small enough that when you put it in front of users, the signal is clear. One slice, one belief, one answer.
