“It’s more like mediocre value proposition”
Since its launch in 2011, The Lean Startup has been one of the most talked-about books in business circles. And, no concept in that book has caused fiercer debate than the “Minimum Viable Product” (MVP).
It’s a polarizing concept. Detractors feel it encourages the release of crappy, uninspired products. Proponents point out the time and money that it can save.
At Payable, we’re fans of the approach, but we try not to get dogmatic about it.
We’re perfectionists at heart. But, left unchecked, that inclination could torpedo our business. For any startup, it’s a race against time and burn rate to find a viable business model.
That’s why the MVP approach has been so helpful for us. It forces us to fight our natural tendencies and ask, “How can we learn faster?” Instead of building a polished feature for a month, we try to figure out what can be done in a day or, at most, a week.
But, doing something “fast” and doing something “well” are often at odds.
What follows is the framework that we use at Payable to reconcile our need to learn quickly with our desire to make awesome products.
How much to build?
It’s important to remember that MVP is a decision-making tool. It informs the product development process, but is not the process itself. That’s why we use these questions to figure out what we need to build:
• What does the team need to learn?
• What will our users see?
• What will the team need to believe the results?
What does the team need to learn?
It sounds obvious, but this step is often ignored. Teams often start from a point of building a “lightweight version” of whatever it was they were going to build anyway. But, that doesn’t necessarily tell your business anything useful.
The focus should be on what you need to learn in order to make a decision.
That’s the whole point. It’s not just about “trying stuff” or “running tests.” You need to learn critical information about your business, make a decision, and move on to the next problem.
We’ve found that two questions are useful for distilling the discussion of “what we want to learn” down to a useful point:
1) What’s the riskiest assumption that currently exists in our business model?
2) Why is the team so interested in X (whatever the thing is we keep circling around)?*
We used this technique over the summer, during Y Combinator.
Our vision for Payable is to get people paid instantly for the work they do. We know that the people doing the work want to be paid faster. But, what about the people paying for the work? They’re incentivized not to pay. It’s in their best interest (financially) to drag out payment as long as possible. So, for us, our riskiest assumption was:
Our users’ customers will pay an invoice upon receipt from a mobile device
It goes against the way invoicing is traditionally handled. But, if it’s true, it paints a path for us to dramatically speed up the time it takes our users to get paid. And, that’s a significant business opportunity.
*If you’re playing it by the book, you should always test your riskiest assumption followed by the next riskiest assumption and so on. But, that assumes you’re an accurate judge of risk. If the team keeps coming back to a topic passionately, it usually means there is some unstated belief. Ask “why?” to uncover it and get it out on the table for discussion.
What will your users see?
This question is the key to reconciling MVP with building something awesome. In short, if your users will see or interact with it — build it. If they don’t — fake it.
Jess Lee over at Polyvore calls this “Fake Door Testing.” She used the technique early on to test out whether shoppers would buy outfits assembled by their community of users. They created real ads, pages, and user interfaces for shopping and purchase. But, behind the scenes, it was the Polyvore team creating the outfits, stuffing boxes, and shipping them.
We used a similar line of thinking to determine what we needed to build (and just as importantly, what we didn’t) to test whether our users’ customers would pay quickly from their mobile devices.
The key components of that test were:
• Collect user’s bank info for receiving payment (build)
• Create Invoice (fake)
• Send Invoice (fake)
• Allow user’s customer to pay (build)
We faked the invoice creation and sending because neither our users nor their customers needed to see them.
We automatically created an invoice whenever one of our users entered hours in Payable. An email alert saying “Mat worked 4 hours for Wally West” would go out to our whole team and we’d scramble to manually plop that data into a responsive HTML template. We’d then email a link to that web page to the customer. Boom. Invoice.
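To make the “fake door” concrete, here’s a minimal sketch of the kind of internal alert described above, in Python. The function name, the team address, and the message wording are all hypothetical illustrations, not Payable’s actual code; real sending over SMTP is omitted.

```python
# Hypothetical sketch of the internal "fake door" alert: when a user logs
# hours, build an email that tells the whole team to go assemble the
# invoice by hand. Only message construction is shown; sending is omitted.
from email.message import EmailMessage

TEAM_LIST = "team@example.com"  # placeholder address, not Payable's


def build_alert(worker: str, hours: float, client: str) -> EmailMessage:
    """Build the internal alert email for a newly logged block of hours."""
    msg = EmailMessage()
    msg["To"] = TEAM_LIST
    msg["Subject"] = f"{worker} worked {hours:g} hours for {client}"
    msg.set_content(
        "Time to fake an invoice: drop these hours into the HTML "
        "template and email the customer a link."
    )
    return msg


alert = build_alert("Mat", 4, "Wally West")
print(alert["Subject"])  # → Mat worked 4 hours for Wally West
```

The point of keeping the automated part this thin is that everything downstream — formatting the invoice, emailing the customer — stays manual, so nothing gets built until the assumption is validated.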
[Image: the original invoice email test]

We built the money-movement aspects because our users interacted with them. Also, it was people’s real money on the line. That’s not something to be trifled with. Lastly, Stripe makes this a task measured in days, not weeks. So, we’re still keeping true to our need for speed.
What will the team need to believe the results?
This is a subtle point I’ve noticed from years of doing MVPs at Intuit and Payable. If the goal is to make a decision, then that decision needs to stick. But, I’ve seen plenty of teams (especially mine) discount the results because of how many corners were cut (“Oh, that was such a crappy experience, no wonder nobody signed up.”).
If that happens, then your team is just going through the motions. No decision was really made. Nothing was really learned. A little bit of product was built that will probably be discarded. That’s wasted time. And wasted time is your business’s arch-nemesis.
So, ask your team if what you’re planning to build (and fake) is “good enough.” Will they believe the results when they come in?
You may have a team that doesn’t like to release a product or feature until it’s perfect. In that case, you may release something that’s not exactly “minimum.”
That’s ok. The point is to move faster and make decisions. If you’re doing better on both those fronts, that’s progress.
In our case, that invoice we “faked” with an HTML template still had some nice animations on it. We didn’t need to build those. But, it made the test feel more “real.” It couldn’t be so easily discounted.
And, it was a major improvement from our previous feature that was a “don’t release it until it’s ready” monthlong boondoggle. Progress.
When our test was complete, we had our results. There was no excuse to discount them.
It turns out some people didn’t get paid. Some had customers who still cut them a paper check outside our online payment system.
But, others got paid in less than an hour. In the end, the average time to payment was just 4 days. And, the most common outcome was same-day payment — exactly what we were hoping for.
We had the data we needed to pursue this direction and build it for real.
Bringing it all together
If your product team is like ours, there’s a mix of different frameworks and theories running through your heads at any moment about how you should build a product or business. It’s tough to keep it all straight.
The truth is…there is no “right.”
All you have is your team: where it’s at today and where you’d like it to be. For us, we’re trying to learn faster while still making a product we’re proud to call ours. This approach has helped us get closer to where we need to be.