Tag Archives: minimum viable product

MVP buy-in

One of the biggest challenges product innovators in established companies face in defining an MVP is getting buy-in from internal stakeholders.

These could be senior execs, peers, other departments, partners, or even your boss.

You might say, “This is all about politics, and that just gets in the way of innovation.” That’s naive.

You might say, “Consensus-driven product development kills creativity and innovation.” You’d be right, but I’m by no means advocating a “decision by committee” approach.

You might say, “Internally defined products with no customer input lead to lousy products that fail in the market.” That statement is correct. I 100% agree with it. But it’s not the whole picture.

The reality is that product managers and corporate product innovators have multiple internal constituents to manage.

It is imperative that they somehow make everyone feel a part of the process. Otherwise, they risk their product idea being run over, shelved, sidelined, or destroyed before it’s even left the concept stage.

Lack of stakeholder traction can often be a bigger roadblock than lack of customer traction.

I call these folks internalvangelists.

So how do you get internal traction on your product idea? How do you get buy-in on your customer-driven MVP without it getting railroaded by others?

How do you build traction internally and develop these internalvangelists?

You use good old-fashioned product management techniques. Specifically, by leveraging a process every Product Manager should know: roadmap prioritization.

My friend, Bruce McCarthy, has talked about the 5 pillars of roadmaps, the first 3 of which are:

  1. Setting strategic goals
  2. Objective prioritization
  3. Shuttle diplomacy

These same pillars can be used for defining an MVP and getting stakeholder buy-in.

Setting Strategic Goals

The first step is to capture your product strategy. You can use the Product Canvas to get started.

What’s great about the Product Canvas is it allows you to document your vision in a simple, portable and shareable way on just a single page. The trick is to be concise. The intent isn’t to capture every nuance of the customer’s problems, nor detailed requirements. Just stick to the top 3-5 problems and the top 3-5 key elements of your solution.

This forces you to not only sharpen your thinking, but also your communication with stakeholders. This, in turn, encourages more constructive feedback, which is what you really need at this stage.

Objective Prioritization

You’ve probably received a lot of internal input (solicited and unsolicited) on features for your product. Most have probably been articulated as “must-haves” for one reason or another. Of course, you know that most of them are probably not really needed at this early stage, certainly not for an MVP.

To quote from the book Getting Real by the founders of Basecamp: “Make features work hard to be implemented. Each feature must prove itself.” For an MVP, each feature must be tied to tangibly solving a top customer problem.

Bruce discusses using a scorecard type system to objectively prioritize features for product roadmapping — in particular, assigning a value metric for a feature’s contribution toward the product’s business goals, and balancing it against a level-of-effort (LOE) metric. The exercise can easily be done in a spreadsheet or using almost any product management software.
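For illustration only, here’s a minimal sketch of that balancing act in Python (the feature names and the 1-10 value/effort scores are invented, and Bruce’s actual scorecard may weigh things differently):

```python
# Hypothetical sketch: weigh each feature's value toward business goals
# against its level of effort (LOE) and rank by the resulting ratio.

features = {
    # feature: (value toward business goals, level of effort), both on a 1-10 scale
    "Self-serve signup": (8, 3),
    "Usage dashboard":   (7, 5),
    "SSO integration":   (6, 8),
}

ranked = sorted(features.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (value, loe) in ranked:
    print(f"{name}: value {value}, LOE {loe}, score {value / loe:.2f}")
```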

A similar approach can be used to prioritize the features for your MVP:

1. Rank each Problem documented in your Product Canvas according to your understanding of the customer’s topmost problem to be solved, followed by the second, and so on.

2. Map Solution elements to Problems. These may not necessarily be one-to-one, as sometimes multiple elements of your Solution may work together to solve a particular customer problem.

3. For each Solution element, identify if it’s a “must-have” for your MVP. Solution elements meant to solve customer Problem #1 are automatically must-haves. The trick is in making the determination for the remaining Problem/Solution mixes.

4. Identify all features for each Solution element. If you already have a list of feature ideas, this becomes more of a mapping exercise. The net result is every feature idea will be mapped directly back to a specific Problem, which is awesome.

5. Mark each feature as “In MVP” or not. Be ruthless in asking if a feature really, really needs to be part of the MVP. (Tip: not every feature under a “must-have” Solution element necessarily needs to be “In MVP”.)

6. “T-shirt size” the LOE for each feature, if practical. Just L/M/S at this point. A quick conversation with your engineering lead can give you this.

As with roadmap prioritization, this entire exercise can be done in a simple spreadsheet. Here’s a template I’ve used that you can freely download.

The beauty of the spreadsheet is it brings into sharp focus a particular feature’s contribution toward solving customers’ primary problems. And an MVP must attempt to do exactly that.
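If you prefer working in code, the same structure can be sketched in a few lines of Python. This is purely illustrative (the feature names, Solution elements, and rankings below are invented, loosely echoing the payment-form example later in these posts, and are not part of the template):

```python
# Hypothetical sketch: the MVP prioritization spreadsheet expressed as data.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    problem_rank: int      # rank of the customer Problem this feature maps back to (1 = top problem)
    solution_element: str  # the Solution element from the Product Canvas it belongs to
    loe: str               # T-shirt-sized level of effort: "S", "M", or "L"
    in_mvp: bool = False   # mark ruthlessly; the default is *not* in the MVP

features = [
    Feature("Build a payment form from a template", 1, "Form builder", "M", in_mvp=True),
    Feature("Connect to a payment gateway",         1, "Payments",     "L", in_mvp=True),
    Feature("Monthly usage report",                 2, "Reporting",    "M"),
    Feature("Custom branding and colors",           3, "Form builder", "S"),
]

# Every "In MVP" feature should trace back to a top-ranked Problem and carry a known LOE.
for f in sorted((f for f in features if f.in_mvp), key=lambda f: f.problem_rank):
    print(f"Problem #{f.problem_rank} | {f.solution_element} | {f.name} | LOE {f.loe}")
```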

Shuttle Diplomacy

To paraphrase Bruce from p26 of his presentation, this is probably the most important part of the process.

You need to get buy-in from your key stakeholders for your product strategy and MVP definition to be approved and “stick over time”. Bruce shares some excellent tips on how to do this on pp26-30.

When you practice shuttle diplomacy:

“A magical thing happens. ‘Your’ plan becomes their plan too. This makes [review and approval] more of a formality, because everyone has had a hand in putting together the plan.”

To be clear, you’re not looking for “decision by committee”.

As the product owner, you will still be looked upon as the final decision maker. (Remember to stand your ground). But you need to actively try to bring others along by encouraging input and providing visibility.

Lean Startup purists may vomit at this, but that ignores the realities of getting things done in an established company as a product manager. As Henry Chesbrough writes:

“You have to fight — and win — on two fronts (both outside and inside), in order to succeed in corporate venturing.”

This means corporate innovators “must work to retain support over time as conflicts arise (which they will).”

This means Stakeholder Development. And that requires shuttle diplomacy.

Download the MVP Definition Template for product managers for free.

How To Define An MVP: A Case Study

In my last post, I talked about how a minimum viable product (MVP) is not the smallest collection of features to be delivered. An MVP is basically an in-market experiment of a product idea that involves delivering real product to actual customers to get their feedback.

An MVP can be used to test your idea whether it’s a brand new product or a new feature for an existing product.

And even if your product is software, your MVP doesn’t necessarily have to be software too.

Folks may be familiar with how Groupon started as a WordPress blog, called “The Daily Groupon”, on which the team posted daily discounts, restaurant gift certificates, concert vouchers, movie tickets, and other deals in the Chicago area.

Food On The Table, a family meal planning and grocery shopping site eventually acquired by the Food Network, started by working with their customers individually, creating meal plans and shopping lists for them on spreadsheets and email, and then buying and delivering the food items themselves.

So how do you go about defining an MVP for your product idea?

It starts with having a hypothesis for what features or capabilities you believe need to be delivered to your target customer in order to provide them value.

This is predicated on having done the hard upfront work of validating your customer’s problem (that it exists, is urgent, and is pervasive), and then maybe even having tested a prototype of your solution vision.

If you feel you have a good enough understanding of your customer’s problem (pain point, job to be done, etc.), use that as a basis to identify what you believe are the must-have features for your MVP that are aligned with your solution vision.

Then test that MVP with real customers. Evaluate your results. Rinse and repeat.

To make this more tangible, here’s an example from my own experience.

For a product idea we had, we wanted to test our understanding of our customers’ top problems and get directional feedback on our solution approach. Directional feedback meant identifying the “right” handful of features to build first for early customers.

Based on some early customer conversations and market research, we developed a view of the problem domain. We sketched out our product vision on the Product Canvas™, which allowed us to break down the problem domain into discrete problems and formulate testable falsifiable hypotheses around what we believed to be the top problems that our solution absolutely had to solve for first.

We built a clickable mockup defined by the key elements of our solution captured in our Product Canvas exercise. To keep things simple, we built a screen for each discrete problem to represent our solution vision — real HTML and CSS, in color, no lorem ipsum, with clickable interactions to represent the primary workflow through the screens.

We didn’t build out every interaction — just the main ones. We formulated a testable falsifiable hypothesis around the ability of each screen to solve a specific problem.

We then set up a number of customer interviews to test our problem hypotheses. During these customer conversations, we listened carefully to fully understand our customers’ world views and their current work flows, even noting the emotions in their voice and their body language (during in-person meetings, when we could do them) as they discussed their challenges and reacted to our screens.

We were deliberate and meticulous about documenting the results.

It turned out that while we had identified a viable problem domain, our view of what early customers considered their chief problems was invalidated. We also learned that while our solution approach was generally in the right direction, there were features we had not envisioned that early customers considered must-haves in the initial delivery.

As a massive bonus, we were actually able to garner a handful of very early customers who were willing to co-test the solution with us, further validating the fact that we had pricked a real pain point and were directionally correct in our solution approach.

As a primary outcome of this work, we were able to understand our customers’ problem at a granular level, which helped prioritize the initial set of features to build. That drove the definition of the minimum viable product version of our solution.

And that’s what we did. We built just those features, and nothing else, and delivered them to that handful of early customers.

In fact, our first MVP wasn’t software. Our first MVP was more of a concierge-type service, sort of like what Food On The Table did — we “manually” delivered the service to each customer individually.

We learned a ton of really useful stuff. Things like what was really important to the customer, what features of the service they used more often than others, real insights into their workflow and how our solution could help improve it, and — crucially — what they were willing to pay for.

We used these learnings to then define a software MVP, and deliver it to early committed customers. The learnings from our “concierge” MVP experiment helped boost our confidence in defining the requirements for our software MVP. In other words, it was much less of a guess than it otherwise would have been.

We didn’t really bother with calling the software MVP a “release 1.0” or “version 1.0”, because that was irrelevant. We just focused on testing the solution until we received customer validation that it was truly providing value.

That gave us the confidence to know our product idea was “good to go” to scale up, put some real sales and marketing muscle behind it, and sell to more customers.

There’s no single right way to approach an MVP. This is just one example of an approach. As Eric Ries states, defining an MVP is not formulaic: “It requires judgment to figure out, for any given context, what MVP makes sense.” Hopefully, this example gives you a template to define and test your own minimum viable product for your next great product idea.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!

 

An MVP is not the smallest collection of features you can deliver

Source: Spotify

There’s a lot of discussion and confusion about what is and isn’t a minimum viable product (MVP).

Worse, many execs have latched on to the term without really understanding what truly constitutes an MVP — many use it as a buzzword and as a synonym for a completed version 1.0 ready to be sold to all customers.

Buzzwords are meaningless. They represent lazy thinking. And using “MVP” to mean “first market launch” or “first customer ship” means you’re back to the old waterfall, traditional project-driven software development, sales-focused approach. If that’s your approach, fine. Just don’t call what you’re delivering an MVP.

On the flip side, lots of folks in the enterprise world, including in product management, over-think the term. It gets lost in the clever nuances of market maturity, and a long entrenchment in the world of release dates and feature-based requirements thinking.

Many folks think of MVP as simply the smallest collection of features to deliver to customers. Wrong. It’s not.

The problem with that approach is it assumes we know ahead of time exactly what will satisfy customers. Even if we’ve served them for years, odds are when it comes to a new product or feature, we don’t.

Now, the challenge with the concept of a minimum viable product is it constitutes an entirely different way of thinking about our approach to product development.

It’s not about product delivery actually — in other words, it’s not about delivering product for the sake of delivering it or to hit a deadline.

An MVP is about validated learning.

As such, it puts customers’ problems squarely at the center, not our solution.

Reality check: Customers don’t care about your solution. They care about their problems. Your solution, while interesting, is irrelevant.

So if we’re going to use the term “MVP”, it’s important to understand what it really means.

Fortunately, all it takes to do that is to go back to the definition.


Download The Handy Primer “What Is An MVP?” >>


Minimum Viable Product (MVP) is a term coined by Eric Ries as part of his Lean Startup methodology, which lays out a framework for pursuing a startup in particular, and product innovation more generally. This means we need to understand the methodology of Lean Startup to have the right context for using terms like “MVP”. (Just like we shouldn’t use “product backlog” from Agile as a synonym for “dumping ground for all possible feature ideas”.)

Eric lays out a definition for what is an MVP:

“The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Eric goes on to explain exactly what he means (emphasis mine):

“MVP, despite the name, is not about creating minimal products… In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.

“Second, the definition’s use of the words maximum and minimum means an MVP is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.”

Let’s break this down.

1. An MVP is a product. This means it must be something delivered to customers that they can use.

There’s a lot that’s been written about creating landing pages, mockups, prototypes, doing smoke tests, etc., and considering them as forms of MVPs. While these are undoubtedly worthwhile (and certainly “lean”) efforts to gain valuable learnings, they are not products. Read Ramli John’s excellent post on “A Landing Page Is NOT A Minimum Viable Product”.

A product must attempt to deliver real value to customers. So a minimum viable product is an attempt — an experiment — to deliver real value to customers.

Which leads us to…

2. An MVP is viable. This means it must try to tangibly solve real world and urgent problems faced by your target customers. An MVP must attempt to deliver value.

So it’s not about figuring out the smallest collection of features. It’s about making sure we’ve understood our customers’ top problems, and figuring out how to deliver a solution to those problems in a way that early customers are willing to “pay” for. (“Pay” in quotes as it depends on your business model.)

If we can’t viably solve early customers’ primary problems, everything else is moot. That is why an MVP is about validated learning.

3. An MVP is the minimum version of your product vision. A few years ago, I had to build an online form builder app that would allow customers to create online payment forms without the need to write any HTML or worry about connecting to a payment gateway. Before having our developers write a single line of code to build the product, we first offered customers the capability as a service: we would get their specs, and then manually build and deliver each online payment form one-by-one, customer-by-customer. Customers would pay us for this service.

This “concierge”-type service was our MVP version of our product vision. Of course, it wasn’t scalable. But we learned a heck of a lot: the most common types of payment forms customers wanted, what mattered most to them in a form, how frequently they wanted to make changes, their reporting needs, and how they perceived the value of the service.

We parlayed these learnings into developing the software app itself — which, by the way, we delivered as an MVP to early customers to whom we had pre-sold the software product. (Yes, we delivered two different types of MVPs!)

Whether you take a “concierge” approach or your MVP is actual code, it most definitely does NOT mean it’s a half-baked or buggy product. (Remember viable from above?)

It DOES mean critically thinking through the absolutely necessary features your product will need on day 1 to solve your early customers’ top problems, focusing on delivering those first, and putting everything else on the backlog for the time being. It also means being very deliberate about finding those “earlyvangelists” that Steve Blank always talks about.

Ultimately, the key here is “maximum amount of validated learning”. This means being systematic about identifying your riskiest assumptions, formulating testable falsifiable hypotheses around these, and using an MVP — a minimum viable product version of your product vision — to prove or disprove your hypotheses.
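To make that concrete, here’s a hypothetical sketch (the assumption, experiment, and pass criterion below are invented for illustration, not taken from the case study) of what writing a riskiest assumption down as a testable, falsifiable hypothesis might look like:

```python
# Hypothetical sketch: a riskiest assumption written down as a testable,
# falsifiable hypothesis with an explicit pass/fail bar agreed before the test.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    assumption: str      # the risky assumption behind the product idea
    mvp_test: str        # the MVP experiment used to test it
    pass_criterion: str  # measurable threshold decided *before* running the test
    result: str = "untested"  # later set to "validated" or "invalidated"

riskiest = Hypothesis(
    assumption="Small businesses will pay to have payment forms built for them",
    mvp_test="Concierge MVP: manually build and deliver forms for 10 early customers",
    pass_criterion="At least 7 of the 10 pay the quoted monthly price after the first form",
)

# After running the MVP experiment, record what actually happened.
riskiest.result = "validated"  # or "invalidated" -- either outcome counts as validated learning
print(riskiest)
```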

Now, validated learning can certainly be accomplished via a landing page, mockup, wireframes, etc. And it may make sense to do these things. Super. But don’t call them MVPs, because while they may deliver value to you and your product idea, they’re not delivering actual value to the customer.

At the same time, the traditional product management exercise of identifying all the features of a product, force ranking them, and then drawing a line through the list to identify the smallest collection to be delivered by a given timeframe is not an MVP. Why? Because this approach is not predicated on maximizing validated learning. If you’re going to pursue this approach, go ahead and call it Release 1.0, Version 1.0, “Beta”, whatever. But don’t call it an MVP.

An MVP is about not just the solution we’re delivering, but also the approach. The key is maximizing validated learning.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!


Download The Handy Primer “What Is An MVP?” >>