Scale Your Sprints: How to Timebox Discovery Work
Imagine you’re a wedding planner. For the sake of scheduling, you decide to give every couple you book exactly six months of planning, regardless of their wedding size, venue, or budget.
On one hand, you’ve made your work much more predictable; on the other, you’ve set yourself up for a lot of waste and some dissatisfied customers. You’ve overshot the effort required for an intimate family affair, but you’ve cut corners on that 300-person wedding in Central Park.
It sounds absurd, yet this is exactly what’s happening in the software industry right now. Don’t get me wrong — I’m ecstatic that agile development is now the norm. But it seems that, in the chaos of agile transformation, too many teams forget that agile wasn’t invented to make us work faster. It was invented to help us build better software.
I’m a huge fan of timeboxing, but if you want to build better software, you must realize that some activities can’t be completed within your two-week sprint. This is particularly true for design.
Others have put forth excellent models to incorporate design into agile. One of the early thinkers on this was Desiree Sy, who articulated the “staggered sprints” model in 2007. Getting design and agile to play nice was also the challenge addressed by Jeff Gothelf and Josh Seiden in their famous book, Lean UX, published in 2013. The latest approach is single-week design sprints, as popularized by Jake Knapp of Google Ventures.
All of these solutions have their place. I’ve seen staggered sprints work very well for teams that have high trust and work on discrete features; Lean UX is a great model for quick-and-dirty concept validation that satisfies many designers and doesn’t hold up release trains; and design sprints give us a convenient structure for Lean UX that is also exciting for stakeholders to participate in.
However, when I observe teams trying to apply these methods to very new product concepts — truly innovative stuff — something doesn’t feel right. You simply can’t validate your novel business model in a week. You can’t necessarily complete a technical proof of concept within a single sprint. And you might not want developers delivering production-ready features while the discovery team is still uncertain if the underlying product is desirable to users.
Creating something new and valuable takes a long time and dramatic iteration cycles. So while delivery-focused agile processes (Scrum, XP, SAFe) have their place, they make it too easy to start churning out high-quality features for a product that no one wants. What we need is an agile process that is adaptable for discovery.
In agile, work increments are measured in units of user value. You haven’t moved forward unless you’ve delivered an increment of working software.
In Lean Startup, that work increment is validated learning. And validated learning comes from running an experiment.
Different types of experiments simply take different amounts of time. Some take more time to plan, others to build, and still others to collect data. All of these factors should be accounted for and allotted sufficient time.
One of the defining features of our Experiment Driven Design framework is that the cycle time is variable. That doesn’t mean you can’t timebox your experiments. It just means that you must scale the timebox to be appropriate for the experiment you are running.
How much time should I devote to my experiment?
To set your experiment timebox, you need to consider three factors:
1. How long will it take to build the assets required to run the experiment?
Once you know what you want to test and what criteria you will use to evaluate the test, consider what you need to build in order to run it. If your experiment is a “pitch MVP,” you might need to create a landing page with a sign-up form. You want it to be good enough to compel users to sign up. How much time will it take for you to write the copy? Design and code the first iteration? Set up the email form? Create all of the advertisements to send traffic to it? Depending on the speed of your team and the quality bar you set, it could take anywhere from a few hours to several weeks!
2. How long will it take to run the experiment itself?
Once you build the assets required for your experiment, how long will it take to obtain sufficient data to validate or invalidate your hypothesis? For experiments with qualitative measures, this is often a matter of the strength of the feedback or observations. In usability testing, for example, you can catch most issues after running only five or so tests. For quantitative experiments, it’s a different story. You need enough volume of usage to reach statistical significance. If you don’t already have traffic to your site, or a panel of people for your survey, it might take some time and advertising dollars to recruit the right users.
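For the quantitative case, the required sample size can be estimated before you set the timebox. The sketch below is my own illustration (the function name and default thresholds are assumptions, not part of any framework mentioned in this article); it uses the standard normal-approximation formula for comparing two conversion rates, with nothing beyond the Python standard library:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.8):
    """Rough per-variant sample size for a two-proportion z-test.

    p_base:   current conversion rate (e.g. 0.04 for 4%)
    p_target: the rate you hope to detect (e.g. 0.05 for 5%)
    alpha:    two-sided significance level
    power:    probability of detecting the lift if it's real
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_base + p_target) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_base * (1 - p_base)
                             + p_target * (1 - p_target)) ** 0.5) ** 2
    return int(numerator / (p_target - p_base) ** 2) + 1
```

For example, detecting a lift from a 4% to a 5% sign-up rate at these conventional thresholds requires several thousand visitors per variant — a useful sanity check before you promise results within a one-week timebox.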
3. How big of a bet do you want to make?
Ultimately, time is money, so setting a budget for each experiment is a great way to establish a timebox. When we work with clients, we want to know what their budget is before talking about the scope of a project. We use that to set a fixed timebox, and then work with them to decide whether or not the project goals can be reasonably accomplished within that time frame. I advocate this practice for internal experiments as well, because it promotes creativity and collaboration while controlling for costs. Of course, if there’s no way to design an effective experiment within the budget that has been set, the project shouldn’t move forward.
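The budget-to-timebox arithmetic is simple enough to sketch. Assuming you know a fully loaded daily rate for each team member (the numbers in the usage note are hypothetical), the budget divides directly into working days:

```python
def timebox_days(budget, team_size, daily_rate):
    """Convert an experiment budget into a timebox in whole working days."""
    burn_per_day = team_size * daily_rate  # total cost of the team per day
    return budget // burn_per_day
```

A team of three at a $1,000 daily rate with a $30,000 experiment budget gets a ten-working-day timebox — roughly a two-week sprint. If no effective experiment fits in that window, that is the signal not to move forward.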
Putting this into action
The first step is acknowledging that no matter how confident you are in a solution, building and releasing it for the first time is always an act of discovery. You can’t help but learn.
So the next time you set out to build something new, frame it as an experiment. It can be as big as an entire product launch, or as small as changing a button color. In addition to your usual budgeting and planning, consider the following:
- What is the outcome you expect?
- How will you know that it is working?
- What is everything you’ll have to build to determine that? What is the cost?
- How long will it take to evaluate?
- What will you do if it proves to not be working? What will you try next?
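One way to make that checklist concrete is to capture each experiment as a small record. This is only a sketch — the field names are mine, mapped one-for-one from the questions above — but the same fields work just as well as columns in a spreadsheet or a card template:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str            # the outcome you expect
    success_metric: str        # how you'll know it's working
    assets: list = field(default_factory=list)  # everything you must build
    cost_estimate: float = 0.0                  # what those assets will cost
    eval_days: int = 0                          # how long evaluation will take
    next_step_if_invalid: str = ""              # what you'll try next
    result: str = "pending"                     # filled in when the data is in
```

Writing the `next_step_if_invalid` field before you run the experiment is the part most teams skip — and the part that keeps an invalidated hypothesis from stalling the project.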
The best news: you don’t need to do this publicly in order for it to be effective. In fact, you don’t even need to sidestep your organization’s release train or project management structure to track experiments.
Simply document and track the experiment. You can do this by setting up a Trello board just for experiment tracking. If you’re more of a spreadsheet person, use a spreadsheet. Or, if you want a more structured software solution, consider GLIDR.
Last but not least, send me an email if you’re brave enough to try this. Let’s share notes!