Tag Archives: validated learning

Do You Have A Problem Worth Solving?

In order to pursue any product idea — a new product, or a new feature for an existing product — you must make sure it’s a problem worth solving.

If it doesn’t solve a tangible, real problem that lots of people are facing, and are willing to pay to have solved, it’s not worth spending your time on. Move on to your next great idea.

So how do you go about figuring out if your product actually solves a problem that’s worth solving?

And what makes a problem worth solving?

In their book, Tuned In, authors Craig Stull, Phil Myers and David Meerman Scott talk about this in detail.

In order to know whether your product idea solves a problem that’s worth solving, it must satisfy the following criteria:

  1. Is the problem urgent?
  2. Is the problem pervasive?
  3. Are customers willing to pay to have the problem solved?

These three questions need to be answered before investing a ton of money or resources into developing and launching your product.

If the answer to any of these questions is “no”, then you need to pivot.

Let’s talk about each of these in turn.

Is the problem urgent?

Any product idea you’re pursuing must solve a problem people really care about. It needs to be a real pain point, a critical need, or a super important job for them.

The pain may manifest as costing them money, time, resources, effort, credibility, or some other significant inconvenience or frustration. Even perhaps emotional or physical pain.

The need or job could be social (look good, gain status, etc.), emotional (feel better, feel more secure, etc.), or functional (an important job that must get done).

If it’s truly a pain point or priority need/job, people will have expended time or effort trying to solve the problem. They may even have “hacked” together their own solution, or spent money trying to solve it. This is what’s meant by the problem being urgent.

But why does this matter? Why is it so important to validate the urgency of the problem?

Because nothing is more frustrating than working yourself silly trying to solve a problem that either doesn’t exist yet or that people describe as “not a big deal”.

Here’s an example:

I hate taking out the trash.

I have to do it twice a week: put it out on my driveway in the evening so the garbage truck can pick it up first thing the next morning.

I especially hate doing it in the winter when it gets really cold.

Is it a job I need done?

Most definitely!

Is it a pain?

Yes!

But have I done anything to solve my problem?

No.

I keep complaining about it. But I’ve done nothing to change the situation.

It’s just not a priority for me — it’s just not urgent enough.

As product people, we’re naturally wired to look for solutions. So we quickly and easily fall in love with our solutions.

(Thanks, Ash Maurya, for representing this first on your Lean Canvas.)

But we need to care about PROBLEMS.

So if you’ve got a product idea, you need to first care about your customers’ problems.

This is true whether you’re thinking about your next new feature or a wholly new product.

Remember:

Customers don’t care about your solution. They care about their problems.

Just make sure the problem is an urgent one.

Is the problem pervasive?

The urgent problem needs to be one felt by a large enough number of people to make it worthwhile for you to develop and sell your product.

Why?

Say one product will cost $1,000 to make, but only two people in the whole world will pay you $10,000 each for it. Another product costs $10,000 to make, but 10,000 people will pay you $10 each for it.

Which product is a better use of your time?

Quite simply, if enough people aren’t experiencing the problem, then the market potential isn’t big enough, and your product idea isn’t worth pursuing. Period.
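
To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. It assumes the stated costs are one-time costs to build each product (an assumption, since the example above doesn’t say):

    def net_return(buyers, price_per_buyer, cost_to_make):
        # Revenue from all buyers minus the one-time cost to build the product.
        return buyers * price_per_buyer - cost_to_make

    product_a = net_return(buyers=2, price_per_buyer=10_000, cost_to_make=1_000)    # $19,000
    product_b = net_return(buyers=10_000, price_per_buyer=10, cost_to_make=10_000)  # $90,000

    print(product_a, product_b)  # 19000 90000

Product B not only nets more here; a pervasive problem also leaves room for the market to keep growing.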

Even for a new feature, you need to size the market opportunity for it.

Your company has a choice of whether to focus its resources on developing feature A or feature B. In fact, it has a choice of whether to have its resources focused on your new feature idea or something else entirely.

So no matter whether you’re pursuing a new feature or a new product, make sure it’s solving a problem that’s pervasive.

Are they willing to pay to have the problem solved?

This is critical — you may find a lot of people complaining about the problem you’ve identified… but are they willing to pay to have it solved?

Most people have all kinds of problems that they’re just not willing to pay money to solve.

For example, I may whine about taking out the trash, especially in the bitter cold of winter.

And it becomes an urgent enough problem that I finally get my teenage son to do it. (Hey, it builds character.)

And lots of other people may be doing the same thing as me.

But neither they nor I would pay to have someone come to our house twice a week to put the garbage out by the curbside.

Even for an existing product, a new feature must be able to create some tangible business value. Will customers pay for the new feature? Will the new feature justify a higher price for your product? Will it increase customer lifetime value, or accelerate new customer acquisition?

As long as you’re in a profit-making enterprise, it’s worth solving an urgent and pervasive problem only if the people with the problem are willing to pay for your solution.

Are you done? Not quite…

In order to decide whether to pursue your product idea or not, you need to consider a couple of additional things.

First, your company may have (and likely has) a point of view (spoken or unspoken) on what it considers worthwhile revenue opportunities.

For example, at a $7 million company, a $50k opportunity could get the CEO’s attention…

…But at a $500 million company, anything less than $250k may not get much interest.

If so, it’s important to check whether your product idea meets that threshold.

Second, your product may have specific strategic business goals, such as driving new customer revenue, or generating expansion revenue from existing customers, etc.

If so, you’ll need to evaluate whether your product idea contributes in a meaningful way to achieving those goals.

In other words, is there enough monetizable value in your product idea?

If your product is currently generating $50 million in revenue with a goal to grow 15% in the next year, and you estimate your product idea could drive $500k in additional revenue, that means it will contribute less than 7% toward that goal.

Whether that’s good enough will depend on how it compares to other ideas that contribute toward the goal, or on whether it can be rationally combined with other ideas as part of a theme — in which case it comes down to how the theme as a whole contributes toward the goal.
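
Here’s that contribution math spelled out, using the hypothetical numbers above (a quick sketch, nothing more):

    current_revenue = 50_000_000           # $50M in current product revenue
    growth_goal = 0.15 * current_revenue   # 15% growth goal = $7.5M in new revenue
    idea_revenue = 500_000                 # estimated revenue from the product idea

    print(f"{idea_revenue / growth_goal:.1%}")  # 6.7% of the growth goal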

So to recap:

To pursue any product idea, make sure it’s a problem worth solving.

This means the problem must be urgent and pervasive, and your target customers must be willing to pay to have the problem solved.

Furthermore, your product idea must have enough monetizable value to contribute meaningfully toward your company’s strategic goals.

Fortunately, it’s not that difficult to learn how to do this. 🙂

No, I Can’t Give You A Roadmap For Our New Product (Yet)

Cartoon by Roger Latham

A fellow product manager who’s working on a new product idea recently wrote to me:

“Common feedback I receive from our engineers and executives is they don’t have a good grasp of the product vision. They say, “OK, that’s great, we can build that. But where are we going with this if we find the hypothesis to be true? What’s the long term vision for the product?” In essence, they’re asking what’s the end goal in 2-5 years, and if you show me that I’ll have a better sense of the architecture and tools I need to account for.”

This product manager is right at the very initial stages of his product idea, where he still needs to test the problem and solution hypotheses. But he’s already being asked for a long-term product roadmap! Sound familiar?

While the request may seem perfectly reasonable, it’s misplaced at such an early stage. The question about architecture and tools may also seem reasonable on the surface, but it’s a scale question, and not the right one to focus on before you know whether you’ve identified the right customer problem and have proof that your solution approach can actually solve it.

Execs are trying to assess the potential market opportunity, the underlying investment that will be needed, and the speed to achieving ROI. So naturally, they want to see the long-term roadmap. But at such an early stage, you’re likely in no position to answer the question.

Even at the conceptual stage, you may have a list of potential features in your mind. You could prioritize them using one of the many scorecarding techniques written about by seasoned product practitioners. (See this, this, this, and this, to reference just a few.) These are all very valid techniques written by product folks who really know their stuff.

Doing that so early is a waste of time, though. Creating a product roadmap is predicated on having a coherent product strategy, which is predicated on having a validated understanding of who your customers are, what their pain points are, and whether they’ll find your solution valuable. If you don’t even know whether customers will buy your solution, what’s the point of having a roadmap?

So when do you develop a roadmap for a new product?

For a startup product, the first step is always to identify the customer segment and customer problem. Quickly capture your product vision, formulate your customer, problem and solution hypotheses, and systematically test them. As you go along, you need to identify early adopters to whom you can deliver your solution — typically you build the product for these folks first. If practicable, test pricing at this stage as well.

Figure out what you absolutely must deliver to these folks to solve their #1 problem, and work like hell to deliver it as quickly as possible. All other features get cut from scope and sit in the backlog.

After delivering this minimum viable product (MVP), you need to actively gather feedback from these early customers. You’re using your delivered product to gain deeper insights into the customer’s problem, and you’re trying to understand what you need to improve in the product to (1) get these customers to stick, and (2) attract more new customers.

In addition, now that you have an initial set of engaged customers, you can also try to test their second level set of problems or discover new ones. Understanding those problems may identify new enhancements and features. You’ll now be armed with a set of improvements, fixes and new ideas that you can put into the backlog.

If you have a sales force and have armed them to sell your MVP, make sure you’re actively gathering feedback from them as well. You may uncover opportunities to evolve your sales messaging and positioning. You may also uncover feature gaps. If so, put them into the backlog as well and earmark them for further validation. Pay particular attention to feedback about anything that’s preventing a sale.

You’ll have a pretty good backlog at this point, so you can now start building an initial roadmap. Start by prioritizing the backlog based on a reasonable, customer-centric set of criteria. I typically skew my priorities heavily toward voice of customer (VOC) feedback. While features should solve tangible customer problems at any stage of the product lifecycle, it’s even more important at this early stage.

Also factor in the company’s strategic goals. For example, if the company’s focus is retention, features that create stickiness may carry more weight; if the focus is growth through customer acquisition, then sellable features may be more important; if it’s expansion revenue — i.e., greater revenue from existing customers — features that drive engagement and up-sells may take priority.

Make some allowance for operational issues. You may not necessarily have a scale problem yet, so these types of issues should not take precedence over VOC or driving revenue; however, you don’t want to completely ignore technical debt or reasonable operational fixes.
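
For illustration only, here’s a minimal sketch of what a customer-centric weighted scoring pass over the backlog might look like. The backlog items, criteria and weights are all made up; the point is simply that VOC carries the heaviest weight, strategic fit comes next, and operational fixes get a smaller allowance, as described above:

    # Hypothetical backlog items scored on three illustrative criteria (0-10 each).
    backlog = [
        # (name, voc, strategic_fit, operational)
        ("Fix export bug reported by several customers", 9, 3, 6),
        ("Self-serve onboarding flow",                   7, 8, 2),
        ("Refactor billing jobs (tech debt)",            2, 4, 9),
    ]

    weights = {"voc": 0.6, "strategic_fit": 0.3, "operational": 0.1}

    def score(item):
        _, voc, strategic_fit, operational = item
        return (weights["voc"] * voc
                + weights["strategic_fit"] * strategic_fit
                + weights["operational"] * operational)

    # Highest score first; the bug fix with heavy VOC pull rises to the top.
    for item in sorted(backlog, key=score, reverse=True):
        print(f"{score(item):4.1f}  {item[0]}")

However you weight it, the scores are an input to the conversation, not a substitute for it.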

Once you have a prioritized list, socialize it. (Read this post by Bruce McCarthy on using “shuttle diplomacy” to get buy-in.) For the top priority items on the list, get t-shirt sizing from Engineering, and make a final call to sequence out the items based on customer and business value vs. feasibility. Now you’ve got a validated product in the marketplace with a decent first-pass roadmap that you can build upon. Go forth and conquer!

RIP PRDs. Long Live “Agile Conversations”

I don’t write PRDs.

Product Requirements Documents.

They take too long to write and are typically outdated by the time they’re “finished”, which, it turns out, they never are.

They’re never finished because the market changes. Customers’ needs evolve. Competitors change conditions. New technology changes everything.

Worse, they typically end up creating even more documentation.

Years ago, I worked in a company that followed a pretty typical waterfall process. Product Managers produced PRDs. (MS Word docs.) Big, multi-page docs that contained sections like “Product Overview”, “Business Objectives”, “Features”, “Personas”, “Use Cases and User Scenarios”, and a detailed itemized list of “Functional Requirements”.

Then Business Analysts translated them to “Functional Specifications”. (A bigger MS Word doc.) And then a System Designer translated those to “Technical Design Specifications”. (Yet another even bigger MS Word doc.)

Crikey, we weren’t building an aircraft carrier! We were just trying to build a software app.

Imagine if something had to change in the PRD. (Which it always did, of course.) That change had to get propagated down to each document.

More writing…

But here’s the funny thing: these documents never seemed to reduce the need for the humans involved in the project to talk to each other.

Each of these documents was typically followed by meetings. Meetings to discuss the very documents that were produced. To go over them page by page, line by line.

Going over such a large document takes time. So we’d schedule 3-hour, 4-hour, 6-hour meetings. (No kidding.)

But no one likes long meetings. So after a minor outcry, these meetings were reduced to 1.5 to 2 hours tops. The result was the poor Product Manager (or Business Analyst or System Designer) was now pressured to run through the same big document in less time.

Of course, that didn’t mean people had fewer questions!

So inevitably we’d run out of time and have to schedule a follow-on meeting to get through the rest of the document and answer questions we weren’t able to get to.

It’s not easy to coordinate the schedules of so many busy people…

The net result was that the original 3- or 4-hour meeting was now spread across multiple sessions that took place over several days or even weeks, and we ended up spending more time in total.

More time in writing and discussing documentation.

This could take weeks. Often, months.

And don’t get me started on the “change request” process!

And with so much time spent in writing and discussing the documentation, you’d think the output — the actual product that was delivered — was built as expected.

Nope.

It was amazing how often, despite all the effort invested in documentation and meetings, some particular piece of functionality wasn’t delivered as originally envisioned.

So often it would be due to some disconnect between the various documents everyone was trying so hard to keep synchronized.

I swear, it would make me want to rip my hair out (of which I seem to have less with age!).

We spent so much time producing, coordinating, verifying, validating, and CYA-ing the documentation that we didn’t do the thing that was most important:

Delivering value to customers as quickly as possible.

Our time is better spent interacting with customers, testing solutions, delivering product to them, getting their feedback, and acting on that feedback as quickly as possible.

After going through this one too many times, I vowed never to do it again. That’s when I began experimenting not only with agile software development practices, but customer development and other lean strategies to test, validate and iterate on the business planning aspects of delivering software and digital products and services.

Over time, I’ve developed a more lightweight, human-friendly, and — yes — agile approach to testing, building and launching products. One that has a proclivity toward action and conversations, is fueled by customer centricity, and is predicated on delivering business value and speed-to-market.

To be honest, this approach isn’t particularly revolutionary, and I certainly can’t lay claim to having invented any aspect of it. It’s all pretty much what’s covered in the manifesto for agile software development, and the good news is these practices are being increasingly adopted in different companies and industries for different types of software products and digital initiatives.

And yet, every week I get an email like this:

“As you preach, long MRD/PRD’s make no sense. However, there is some need to provide our business analyst with some form of requirement. What do you do in these cases? I have a Feature Requirements doc I created, but I sometimes feel it is overkill. What would you recommend?”

People are increasingly aware of agile and lean practices. The agile manifesto is in the public domain. There are books on Lean Startup and customer development practices. But what this email (and countless others I receive) shows is that it’s one thing to be aware of a thing, and it’s another to actually put it in practice.

In fairness, that’s understandable. For one, people have different learning curves, in aptitude and even willingness to change. Old habits can die hard. There are also different environments, organizational structures and cultures to consider. Innovating in a startup is different than innovating in a global enterprise company. And then there’s special snowflake syndrome.

I’m on a mission to help as many of these product managers as I can. In my reply to this product manager, I shared the specific practices, activities and tools my teams and I use to replace the overblown PRD process. And I want to share them with you.

Here’s my reply (almost entirely copy-pasted here):

  • I no longer write MRDs, PRDs, or any sort of traditional functional spec. Haven’t written one in years. I don’t have my teams write them either. They’re a total waste of time.
  • If we have time, budget and bandwidth, we’ll always get a Designer to create the screens before getting it to Engineering. Doing otherwise is a waste of Engineering’s time. Only caveat is I keep my Engineering Head / Chief Architect informed and involved.
  • Since the Designer is the initial recipient of the stories, we can write them at a fairly high level. We don’t get hung up too much on granular acceptance criteria — the use case, user goal or job story is much more important at this stage.
  • As such, at times we’ve provided just a 1-pager with bullets. Because design is iterative, we lean more heavily on interactive ongoing conversations than documentation.
  • If we don’t have time, budget or bandwidth for a Designer, PM just hacks the screens — Balsamiq, ppt, whatever. Not ideal, but sometimes you just have to make do. I’ve literally sent a photo of a whiteboard doodle, and written the story around it.
  • As much as possible, for major new features or flows, we will do customer validation. Yep: phone interviews with screen shares. 5 may be good enough.
  • We don’t have a Business Analyst, so PM writes the stories directly, including me. It’s not hard, and removes a middle man. Not saying a BA/BSA is never needed — just saying in our case we haven’t had the need for one, and have managed fine.
  • When ready to provide the stories to Engineering, we do write more granular stories. Granularity is determined by the detail put into the screens. Let’s say we had a designer mock up every detail — every click, button placement, font, color, and even copy — then it’s just easier to add the mockup as supporting documentation and point to it in the acceptance criteria. If the design was more high-level, like a ppt hack or my silly whiteboard doodle, naturally I need to provide more specificity.
  • We use Aha.io to capture ideas, convert them to features, write stories, attach supporting artifacts, do prioritization, and map out an executable roadmap. It has some super cool features:
    • It will spit out a req doc from the stories you write. This is helpful if you’re using a 3rd party dev firm and need to provide them a written doc during the initial planning stages. So we no longer need to use MS Word.
    • It has an awesome Jira integration feature: I literally click a button and — boom — everything is automatically placed in the Jira backlog.
    • It allows me to publish status on our roadmap execution to senior execs. This is extremely helpful, as it gets me away from PowerPoint and Excel crap.

Ultimately what matters most is joint understanding between Product Management and Engineering (and Design, if you have it). If there’s a good relationship, all sides can come to an agreement on how the reqs should be delivered. And remember: regardless of how the reqs are documented, ongoing conversations between PM and Engineering are the key.


Are you still writing PRDs/MRDs? Or have you moved on to more effective processes? Share your experience in the comments below.

5 Steps To Validate Your Product Idea Without A Product

Here’s a scenario:

Top Exec at your company comes to you and says, “Yesterday, I was talking to Big Industry Player and they mentioned how they have Shiny Object. I think we should have Shiny Object too. How fast can we get it done?”

Sound familiar?

At last weekend’s ProductCamp DC event, my co-founder and I hosted a session called “Tales From The Product Frontlines”. Our first topic was about this very scenario, and it so energized the participants that it took up 30 of the 45 minutes we had been allotted!

During the session, we asked product people questions like:

  • What do you do now to vet new ideas, whether your own or from some other source?
  • How is that working?

Most folks talked about having some sort of governance structure, such as an Executive Steering Committee to vet new ideas based on some established criteria. Many talked about the need to create a business case, and some advocated that Product Management is best suited to do that, regardless of where the idea came from. Yet, after the session, every person I spoke with told me they felt writing a business case was a total waste of time.

Why is this? Because the old ways just don’t work.

Pragmatic Marketing’s 2013 survey revealed that product managers spend over a month’s worth of time writing business cases. These business cases are filled with highly assumptive 3- or 5-year projections that are used to support significant investment asks. It’s no wonder we hate this — we’re staking our professional credibility on these unvalidated assumptions!

The problem is we were never taught a systematic method by which to obtain the crucial information needed to inform a business case.

So what’s the solution?

Let me pause here to say if you’re hoping I have a magic secret for quickly validating a product idea, or if you’re totally married to writing business cases or MRDs, then stop. This post isn’t for you, because:

  • It’s different and requires you to THINK.
  • It’s hard work.
  • It can take some time to get results.

But it works. I’m writing for the 10% who (1) realize I’m telling the truth (that the old ways just don’t work), and (2) are willing to try something different.

(Thanks, Kevin Dewalt, for putting this so well, and forgiving me a little plagiarism!)

The solution is validated learning

Bottom line: writing a big document is a colossal waste of time. Instead, focus on validated learning.

Because any new product idea is based on assumptions, those assumptions need to be validated. This can be achieved by formulating testable falsifiable hypotheses around the riskiest assumptions, and rigorously testing these hypotheses.

The process of validation can be outlined in 5 steps. Each step generally follows this meta-pattern:

  • Formulate a testable falsifiable hypothesis.
  • Test the hypothesis.
  • Analyze results and learnings.
  • Decide to pivot or persevere.
  • Repeat.

I’ve used this process to validate ideas for software products, but I imagine it could be adapted for any product concept. (So give it a try and let me know your results!)

1. Write down your customer hypothesis.

Most folks typically start with the solution first, which is the wrong place to start.

You need to take a step back and think very deliberately about your customer. This is true whether you’re pursuing adding a new component to an existing product, or are pursuing a truly new product.

Note, by customer I mean the individual who will buy your product. Even in B2B, you need to think about the specific individual or set of individuals to whom you will need to sell your solution. That’s your customer.

2. Write down your problem hypothesis.

What problems does your idea solve for the customer? One way to do this is to think about the goal or job the customer is trying to accomplish. For example: “I believe [customer] has a problem achieving [goal].” Or: “I believe [customer] is trying to accomplish [job], because [desired benefit].”

I typically use the Product Canvas™ to do this, as it allows me to break down the problem domain into discrete problems in a focused manner, and then formulate hypotheses around what I believe to be the top problems that my solution absolutely has to solve for first.

3. Validate your customer/problem hypothesis.

Now it’s time to test your hypothesis. You do this by talking to folks you believe meet your target customer demographic.

Be sure to define minimum success criteria: the minimum amount of data you will need from the test to justify investing more time, effort or resources into proceeding with the idea.

When you’ve met your minimum success criteria, analyze your data, and decide whether to pivot or persevere.

A pivot is a fundamental change in direction of your business model or product strategy. You face a pivot when your hypothesis has been invalidated (i.e., proven false).

At this early stage, you could expect a pivot in terms of a change to your target problems, your target customer, or both. This may mean going back to step 1 or 2.

However, if the results of your test prove (i.e., validate) your customer/problem hypothesis, you may decide to persevere, and move on to step 4.
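
It helps to write the minimum success criteria down before the test so the pivot-or-persevere call isn’t made on gut feel after the fact. Here’s a minimal sketch of that idea; the threshold (at least 7 of 10 interviewees describing the problem as urgent) and the interview results are hypothetical:

    MIN_INTERVIEWS = 10
    MIN_URGENT = 7  # minimum success criteria, defined before the test

    # One boolean per interview: did this person describe the problem as urgent?
    interview_results = [True, True, False, True, True, False, True, True, True, False]

    def decide(results):
        if len(results) < MIN_INTERVIEWS:
            return "keep testing"  # not enough data yet to judge
        return "persevere" if sum(results) >= MIN_URGENT else "pivot"

    print(decide(interview_results))  # persevere (7 of 10 described it as urgent)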

4. Validate problem/solution fit.

Now that you’ve validated your customer/problem fit, you need to test whether your idea is a potentially viable solution to the customer’s problem.

There are many ways to go about this, but nothing really beats creating a visual representation of your envisioned solution in the form of a wireframe or mockup. To keep things simple and minimize work, I look to design a representational screen or flow for each discrete problem identified on my Product Canvas and validated via the previous steps.

The primary goal of this stage is to garner early customers who endorse your solution vision, and would be willing to use (and ideally pay for) an early version of the product. You’re also looking for directional feedback to identify the “right” handful of features to build for these early customers to prioritize your product development.

Formulate hypotheses around your screens, define your minimum success criteria, then reach back out to the customers you had interviewed earlier and demo the screens to them. Also try to mix in some new folks who fit your target customer demographic.

When done, just like you did at the end of step 3, analyze your data and decide whether to pivot or persevere. At this stage, you may pivot on the solution, the problem or even the customer. Or, if you’ve validated your hypothesis, you may have problem/solution fit, and can move on to step 5.

5. Validate your solution via an MVP.

There are many misconceptions about what constitutes an MVP, and I’ve written about these before. In short, an MVP is an actual product that attempts to deliver real value to customers.

It’s “minimum” in the sense that it’s an attempt to deliver the absolutely necessary set of features or capabilities needed to solve the customer’s problem for which the customer will pay.

A primary outcome of the previous step is being able to understand your customers’ problems at a granular level, which helps prioritize the initial set of features to build. This drives the definition of your MVP.

Once again, formulate a set of testable hypotheses to ensure you’re continuing to drive your product development based on validated learning. Depending on the complexities of the problem and solution, and the nature of my target customer, I may opt to first demo the MVP before actually delivering it. The results of your MVP test will again determine whether to persevere or pivot.

If you’ve gotten this far, you’ve validated critical components of your product idea. It doesn’t mean your idea is guaranteed to succeed, but you will have gained a far better understanding of the market opportunity, allowing for a far less assumptive and much more robust business case for your new product idea.

Using Customer Development To Create The Business Case For Your Product Idea

Many of my product help calls are from folks frustrated by how hard it is to pursue a new product inside an existing company.

They complain about how difficult it is to secure resources and garner internal stakeholder buy-in.

If you find yourself in this position, congratulations!

As painful and frustrating as it is, many successful product people I’ve met have gone through exactly what you’re experiencing — including me!

The problem is the way we’ve been taught to pursue this — to first write a business case, business plan or MRD — just doesn’t work. The fact is, people think writing a business case is a waste of time and hate it.

And nowhere are we taught how to cultivate stakeholder support.

Like it or not, every project in an existing company, regardless of the size of the company, needs an internal champion or sponsor.

Lack of stakeholder buy-in can be the biggest impediment to your product no matter how good your product idea may be.

So it’s no surprise really that product people are frustrated.

 

So here’s a new process I’ve been following that’s worked much better for me:

Using Customer Development To Create The Business Case For My Product Idea

In a nutshell, I deferred asking for major dollars and resources until I absolutely needed to. I used a series of validated learning milestones to build momentum internally and to build the case for investing in my product idea.

Here’s what I did:

1. I quickly sketched out my product strategy on a 1-page Product Canvas. No wasting time writing a multi-page document no one is going to read.

2. I decomposed the product strategy into critical learning milestones meant to answer the most important questions in my product strategy:

  • How does our target customer describe the problem?
  • How are they solving it today?
  • Why is that solution not working for them? In other words, why is the problem still a pain for them?
  • How can we know before we invest a lot in development, sales and marketing that the solution we’re thinking of building really solves the problem?
  • How quickly can we get our first customer?
  • What are the most important features we need to have in our go-to-market product?

3. I figured out the least amount of work I needed to do to maximize my learning for each milestone.

4. I broke down my investment need into these milestones, showing how ROI could be tangibly achieved based on measurable results.

Here’s what the investment plan looked like:

[Image: validation workflow]

In the past, I would have asked to spend money on 3rd party market research (four to five figures), a design agency to craft the user experience (five to six figures — ugh!), a usability study (five figures), and a large development team (many figures).

This would be costly and take a long time before I would have delivered the product to a single customer.

And it’s a tough business case to make.

Instead, because I had broken down the plan into these learning milestones, I was able to easily accomplish the first two milestones by spending little to no money at all.

The first was simply my time, so required no money.

To validate our solution hypothesis, I used Balsamiq to sketch a handful of the key screens of our solution. Total cost was $79 for the tool and my time.

When we were ready to design the user experience, since we didn’t have an in-house designer, we commissioned a cracker-jack freelance designer — way cheaper than hiring an expensive design agency, and way faster. It got the job done.

(If you happen to have an in-house designer or design team, awesome — use them. Your investment “ask” may only be some of their time.)

In this way, I was able to use the resources I had at my disposal for as long as I could to create traction.

This approach not only allowed me to conserve precious funds and resources, but also allowed me to be less assumptive and more data-driven in identifying my investment ask at subsequent stages.

It also enabled me to not only build early traction with customers, but also have them help me define the minimum feature set we’d need to develop to go to market — labeled as the Minimum Sellable Product (MSP) in the picture above.

Here are the benefits that resulted:

  • Instead of writing a massive business case based largely on guesswork, I needed only to sell a bunch of mini-business cases. Way quicker and easier to do.
  • Each mini-business case was informed by the learnings from the previous stage, making each subsequent mini-business case better informed, more robust, and an easier sell.
  • The product strategy was informed by real market insights. (What a product manager needs to do anyway!)
  • I had a customer-driven product roadmap that was tough for anyone to dispute, as it was informed directly by tangible customer insights, which defined what went in our MVP vs. MSP vs. roadmap vs. nice-to-have.
  • This enabled our product development efforts to be more focused, as I had all the ammunition I needed to fend off arbitrary new feature requests that risked derailing our product development.
  • Because of our “co-innovation” approach with our customers, we were able to get “earlyvangelists” that we could leverage to generate momentum for our broader market launch. Customer Development in concert with Product Development!
  • All of this made it much easier for me to garner, maintain and accelerate buy-in from my internal stakeholders, because:
    1. My plan showed a clear milestone-based investment plan with the ROI to be gained at each phase.
    2. Smaller continual investments are easier to digest and support than a large upfront one.
    3. Each investment stage was grounded in real customer data, increasing confidence in pursuing the product.
    4. My stakeholders felt involved in the process, as I made sure to keep them informed and provide them an opportunity to provide feedback.
    5. This, in turn, kept me one step ahead of any potential concerns they may have had, and I could make sure to address them at a future stage.

How To Define An MVP: A Case Study

In my last post, I talked about how a minimum viable product (MVP) is not the smallest collection of features to be delivered. An MVP is basically an in-market experiment of a product idea that involves delivering real product to actual customers to get their feedback.

An MVP can be used to test your idea whether it’s a brand new product or a new feature for an existing product.

And even if your product is software, your MVP doesn’t necessarily have to be software too.

Folks may be familiar with how Groupon started as a WordPress blog, called “The Daily Groupon”, on which the team posted daily discounts, restaurant gift certificates, concert vouchers, movie tickets, and other deals in the Chicago area.

Food On The Table, a family meal planning and grocery shopping site eventually acquired by the Food Network, started by working with customers individually, creating meal plans and shopping lists for them via spreadsheets and email, and then buying and delivering the food items themselves.

So how do you go about defining an MVP for your product idea?

It starts with having a hypothesis for what features or capabilities you believe need to be delivered to your target customer in order to provide them value.

This is predicated on having done the hard upfront work of validating your customer’s problem (that it exists, it’s urgent, and pervasive), and then maybe even having tested a prototype of your solution vision.

If you feel you have a good enough understanding of your customer’s problem (pain point, job to be done, etc.), use that as a basis to identify what you believe are the must-have features for your MVP that are aligned with your solution vision.

Then test that MVP with real customers. Evaluate your results. Rinse and repeat.

To make this more tangible, here’s an example from my own experience.

For a product idea we had, we wanted to test our understanding of our customers’ top problems and get directional feedback on our solution approach. Directional feedback meant identifying the “right” handful of features to build first for early customers.

Based on some early customer conversations and market research, we developed a view of the problem domain. We sketched out our product vision on the Product Canvas™, which allowed us to break down the problem domain into discrete problems and formulate testable, falsifiable hypotheses around what we believed to be the top problems that our solution absolutely had to solve for first.

We built a clickable mockup defined by the key elements of our solution captured in our Product Canvas exercise. To keep things simple, we built a screen for each discrete problem to represent our solution vision — real html and css, in color, no lorem ipsum, with clickable interactions to represent the primary workflow through the screens.

We didn’t build out every interaction — just the main ones. We formulated a testable falsifiable hypothesis around the ability of each screen to solve a specific problem.

We then set up a number of customer interviews to test our problem hypotheses. During these customer conversations, we listened carefully to fully understand our customers’ world views and their current work flows, even noting the emotions in their voice and their body language (during in-person meetings, when we could do them) as they discussed their challenges and reacted to our screens.

We were deliberate and meticulous about documenting the results.

It turned out that while we had identified a viable problem domain, our view of what early customers considered their chief problems was invalidated. We also learned that while our solution approach was generally in the right direction, there were features we had not envisioned that early customers considered must-haves in the initial delivery.

As a massive bonus, we were actually able to garner a handful of very early customers who were willing to co-test the solution with us, further validating the fact that we had pricked a real pain point and were directionally correct in our solution approach.

As a primary outcome of this work we were able to understand our customers’ problem at a granular level, which helped prioritize the initial set of features to build. That drove the definition of the minimum viable product version of our solution.

And that’s what we did. We built just those features, and nothing else, and delivered it to those handful of early customers.

In fact, our first MVP wasn’t software. Our first MVP was more a concierge type service, sort of like what Food On The Table did — we “manually” delivered the service to each customer individually.

We learned a ton of really useful stuff. Things like what was really important to the customer, what features of the service they used more often than others, real insights into their workflow and how our solution could help improve it, and — crucially — what they were willing to pay for.

We used these learnings to then define a software MVP, and deliver it to early committed customers. The learnings from our “concierge” MVP experiment helped boost our confidence in defining the requirements for our software MVP. In other words, it was much less of a guess than it otherwise would have been.

We didn’t really bother with calling the software MVP a “release 1.0” or “version 1.0”, because that was irrelevant. We just focused on testing the solution until we received customer validation that it was truly providing value.

That gave us the confidence to know our product idea was “good to go” to scale up, put some real sales and marketing muscle behind it, and sell to more customers.

There’s no one way necessarily to approach an MVP. This is just one example of an approach. As Eric Ries states, defining an MVP is not formulaic: “It requires judgment to figure out, for any given context, what MVP makes sense.” Hopefully, this example gives you a template to define and test your own minimum viable product for your next great product idea.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!

 

An MVP Is Not The Smallest Collection Of Features You Can Deliver

[Image source: Spotify]

There’s a lot of discussion and confusion about what is and isn’t a minimum viable product (MVP).

Worse, many execs have latched on to the term without really understanding what truly constitutes an MVP — many use it as a buzzword, and as a synonym to mean a completed version 1.0 ready to be sold to all customers.

Buzzwords are meaningless. They represent lazy thinking. And using “MVP” to mean “first market launch” or “first customer ship” means you’re back to the old waterfall, traditional project-driven software development, sales-focused approach. If that’s your approach, fine. Just don’t call what you’re delivering an MVP.

On the flip side, lots of folks in the enterprise world, including in product management, over-think the term. It gets lost in the clever nuances of market maturity, and a long entrenchment in the world of release dates and feature-based requirements thinking.

Many folks think of MVP as simply the smallest collection of features to deliver to customers. Wrong. It’s not.

The problem with that approach is it assumes we know ahead of time exactly what will satisfy customers. Even if we’ve served them for years, odds are when it comes to a new product or feature, we don’t.

Now, the challenge with the concept of a minimum viable product is it constitutes an entirely different way of thinking about our approach to product development.

It’s not about product delivery actually — in other words, it’s not about delivering product for the sake of delivering it or to hit a deadline.

An MVP is about validated learning.

As such, it puts customers’ problems squarely at the center, not our solution.

Reality check: Customers don’t care about your solution. They care about their problems. Your solution, while interesting, is irrelevant.

So if we’re going to use the term “MVP”, it’s important to understand what it really means.

Fortunately, all it takes to do that is to go back to the definition.


Download The Handy Primer “What Is An MVP?” >>


Minimum Viable Product (MVP) is a term coined by Eric Ries as part of his Lean Startup methodology, which lays out a framework for pursuing a startup in particular, and product innovation more generally. This means we need to understand the methodology of Lean Startup to have the right context for using terms like “MVP”. (Just like we shouldn’t use “product backlog” from Agile as a synonym for “dumping ground for all possible feature ideas”.)

Eric lays out a definition for what is an MVP:

“The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Eric goes on to explain exactly what he means (emphasis mine):

MVP, despite the name, is not about creating minimal products… In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.

Second, the definition’s use of the words maximum and minimum means an MVP is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.

Let’s break this down.

1. An MVP is a product. This means it must be something delivered to customers that they can use.

There’s a lot that’s been written about creating landing pages, mockups, prototypes, doing smoke tests, etc., and considering them as forms of MVPs. While these are undoubtedly worthwhile, and certainly “lean”, efforts to gain valuable learnings, they are not products. Read Ramli John’s excellent post on “A Landing Page Is NOT A Minimum Viable Product”.

A product must attempt to deliver real value to customers. So a minimum viable product is an attempt — an experiment — to deliver real value to customers.

Which leads us to…

2. An MVP is viable. This means it must try to tangibly solve real world and urgent problems faced by your target customers. An MVP must attempt to deliver value.

So it’s not about figuring out the smallest collection of features. It’s about making sure we’ve understood our customers’ top problems, and figuring out how to deliver a solution to those problems in a way that early customers are willing to “pay” for. (“Pay” in quotes as it depends on your business model.)

If we can’t viably solve early customers’ primary problems, everything else is moot. That is why an MVP is about validated learning.

3. An MVP is the minimum version of your product vision. A few years ago, I had to build an online form builder app that would allow customers to create online payment forms without the need to write any HTML or worry about connecting to a payment gateway. Before having our developers write a single line of code to build the product, we first offered customers the capability as a service: we would get their specs, and then manually build and deliver each online payment form one-by-one, customer-by-customer. Customers would pay us for this service.

This “concierge” type service was our MVP version of our product vision. Of course, it wasn’t scalable. But we learned a heck of a lot: the most common types of payment forms customers wanted, what was most important to them in a form, how frequently they wanted to make changes, their reporting needs, and how they perceived the value of the service.

We parlayed these learnings into developing the software app itself — which, by the way, we delivered as an MVP to early customers to whom we had pre-sold the software product. (Yes, we delivered two different types of MVPs!)

Whether you take a “concierge” approach or your MVP is actual code, it most definitely does NOT mean it’s a half-baked or buggy product. (Remember viable from above?)

It DOES mean critically thinking through the absolutely necessary features your product will need on day 1 to solve your early customers’ top problems, focusing on delivering those first, and putting everything else on the backlog for the time being. It also means being very deliberate about finding those “earlyvangelists” that Steve Blank always talks about.

Ultimately, the key here is “maximum amount of validated learning”. This means being systematic about identifying your riskiest assumptions, formulating testable falsifiable hypotheses around these, and using an MVP — a minimum viable product version of your product vision — to prove or disprove your hypotheses.

Now, validated learning can certainly be accomplished via a landing page, mockup, wireframes, etc. And it may make sense to do these things. Super. But don’t call them MVPs, because while they may deliver value to you and your product idea, they’re not delivering actual value to the customer.

At the same time, the traditional product management exercise of identifying all the features of a product, force ranking them, and then drawing a line through the list to identify the smallest collection to be delivered by a given timeframe is not an MVP. Why? Because this approach is not predicated on maximizing validated learning. If you’re going to pursue this approach, go ahead and call it Release 1.0, Version 1.0, “Beta”, whatever. But don’t call it an MVP.

An MVP is about not just the solution we’re delivering, but also the approach. The key is maximizing validated learning.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!


Download The Handy Primer “What Is An MVP?” >>


How to Identify (and Mitigate) the Riskiest Parts of Your Product Strategy

Any product strategy is fraught with risks.

Three of the biggest risks to a startup are tech risk, market risk, and ego risk. Corporate innovation faces additional risks: resource risk (resources need to be assigned), implementation risk (need the right implementation skill sets and tools), operational risk (the product needs to be operationally cost-effective) and internal risk (need buy-in and alignment from internal stakeholders).

Identifying these risks and de-risking them are crucial to the success of any product strategy. One of the most compelling things to me about Lean Startup is the focus on systematically de-risking elements of a product innovation through experiments and Validated Learning — one of the five core principles of Lean Startup.

Of course, this is predicated on identifying each of the most essential elements of your product vision. The Product Canvas has been great in helping me do just that. Its 1-page format facilitates having important conversations with my partners and stakeholders to gather their feedback.

“[The Product Canvas] is a smart way for each product manager to have a succinct snapshot of what it means to ‘be’ a product. It is a great way to focus and present to others the critical elements of a product.”

Having conversations with my internal partners is critical to helping me uncover risks and assumptions that I may not have thought of.

As I started doing this, the question became: how do I capture these risks, track progress in de-risking them, and communicate that progress back?

Here, again, is where I found the principle of Innovation Accounting from Lean Startup appealing, which Eric Ries describes in his book:

To improve entrepreneurial outcomes, and to hold entrepreneurs accountable, we need to focus on the boring stuff: how to measure progress, how to set up milestones, how to prioritize work. This requires a new kind of accounting, specific to startups.

In other words, Innovation Accounting provides a framework to measure and communicate progress. The unit of progress is a learning milestone, “an alternative to traditional business and product milestones.” (From the book.)

This last part really appeals to me. Developing a grounded, workable product strategy cannot be moved forward by a date-driven approach, and I’ve seen too many new product development efforts descend into chaos, and even outright failure, under the traditional project management process.

Again, from Eric’s book:

“Learning milestones are useful for entrepreneurs as a way of assessing their progress accurately and objectively; they are also invaluable to managers and investors who must hold entrepreneurs accountable.”

I’d argue we could replace “entrepreneur” with “product manager” or “product innovator”, and “investors” with “executives”.

But again, the question is how to actually do this in practice. Let’s go back to the issue of capturing and tracking assumptions and risks.

At first, I used stickies:

[Image: Product Canvas with hypotheses captured on stickies]

That worked great as a start. Especially for brainstorming or a live update if a colleague or stakeholder was in the room with me.

But not so much for tracking ongoing progress. Plus, translating all that into an update report to share with someone not in the room is just too much work, quite frankly.

I needed something that doubled up as a tool to use and a way to communicate progress, similar to the Product Canvas.

Then I came across Ash Maurya’s blog posts on his Lean Stack approach to doing Innovation Accounting. I liked how he mapped the Build-Measure-Learn cycle into a Kanban-style approach to track his progress.

After experimenting with his approach, I developed a version that worked for me as a product manager. I’ll explain via a made-up, yet tangible, example.

Below is an initial Product Canvas for an online bill payment app that allows customers of a bank to view all their bills in one place and pay directly through their bank account.

[Image: initial Product Canvas for the bill pay app]

Now, as product manager, I should be intimate with my customer base. So let’s say their input was the genesis for this product — e.g., lots of customer requests asking to enhance the existing bill pay service with the ability to view and pay utility bills.

As such, initially I may not see my Customer Segment or Problems as the highest risks. But I do need to identify my early adopters.

In other words, I feel confident in the initial demand, but I’m not certain which of my customers will be most likely to switch their behavior to paying all their bills through our bank. (After all, people don’t always do what they say.) So I highlight this as a risk.

After speaking with my stakeholders, I identified numerous additional risks, which I highlight via PowerPoint’s comments feature:

[Image: Product Canvas with risks highlighted as comments]

All I need to do is click on any comment to view the details:

[Image: Product Canvas with a comment opened]

Now, to track and communicate progress on how risks are being addressed, I use a Kanban style board similar to what Ash uses, called the Validated Learning Board.

[Image: Validated Learning Board]

Risks and assumptions are placed in the Backlog column. When I begin working on a particular risk card, I move the card to the IN PROGRESS section and place a blue card under the Build column, on which I note the experiment I’m running and its falsifiable hypothesis.

If an experiment serves to tackle more than one risk, no problem. You’ll see an example in the image above representing that.

Once I start the experiment (e.g., interview the first customer, or day 1 of user testing, etc.), I move the card to the Measure column.

Once the experiment is over, I move the risk card to the Learn/DONE column, and color-code it green if the assumption has been validated (or the risk de-risked), and red if not.

If I’m running multiple experiments simultaneously, I separate them with a line.

[Image: Validated Learning Board with multiple experiments in progress]

I don’t capture all the details of my experiment on the cards. This is meant for a high-level progress view. Details of the experiment can be presented on its own slide or report.
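
If you prefer to track the board in a lightweight script rather than PowerPoint, here’s a minimal sketch of the same idea as data. The card, risk and hypothesis below are hypothetical, and a wall of stickies or a slide works just as well:

    from dataclasses import dataclass
    from typing import Optional

    COLUMNS = ["Backlog", "Build", "Measure", "Learn"]

    @dataclass
    class RiskCard:
        risk: str
        experiment: str = ""              # the blue card: the experiment and its falsifiable hypothesis
        column: str = "Backlog"
        validated: Optional[bool] = None  # green (True) or red (False) once the card reaches Learn

    # A hypothetical risk card working its way across the board.
    card = RiskCard(risk="Existing bill-pay customers will switch all their bills to us")
    card.experiment = "Interview 10 customers; hypothesis: at least 6 commit to an early pilot"
    card.column = "Measure"                      # experiment under way
    card.column, card.validated = "Learn", True  # experiment finished, assumption validated

    print(card.column, card.validated)  # Learn True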

Finally, I need to make sure I’m systematically identifying the right set of internal stakeholders and capturing their feedback. I covered that in my blog post on Stakeholder Development, in which I talked about the Stakeholder Development Tracker.

As I continue to get feedback from both the experiments and further internal conversations, I use these learnings to update the product strategy represented in the Product Canvas.

A team can use these artifacts to track progress on, say, the wall of an agile room, while also quickly converting them into 3 quick slides to provide a high-level update to anyone on the progress of the product strategy.

Why It’s Better To Be Smaller When Implementing Agile In A Large Company

Having done many waterfall projects, I was recently part of an effort to move a large organization to an agile software delivery process after years of following waterfall. I’ll be blunt: it was downright painful. That said, I’d still pick agile over the multi-staged, paper-intensive, meeting-heavy, PMO-driven waterfall process I encountered when I joined the organization.

Although the shift was painful, it was a terrific educational experience. Based on lessons learned, we adopted certain principles to guide our approach to implementing agile in the organization.

Dream big. Think smaller.

This means having a vision for what the solution will look like and the benefits it will provide customers, but then boiling it down to specifics to be able to execute. For example, at one of my former gigs, we had identified the need to make improvements to our online payments process, and captured over 20 different enhancements on a single slide under the title of “Payment Enhancements”. (Yes, in very tiny font, like 8-point.) Those enhancements were beyond simple things like improving copy or the layout of elements. Each enhancement would have involved material impacts to back-end processes. As such, “Payment Enhancements” is not an epic, or at least, it’s a super big and super nebulous one that cannot be measured. Rather, I argued that each bullet on that 1-pager could be considered an epic in and of itself that could be placed on the roadmap and would need to be further broken down into stories for execution purposes.

Thinking smaller also means considering launching the capability to a smaller subset of customers. Even when pursuing an enhancement to an existing product, it’s important to ask whether the enhancement will truly benefit all customers using your product or whether it needs to be made available to all customers on day 1. Benefits of identifying an early adopter segment: (1) get code out faster, (2) lower customer impact, (3) get customer feedback sooner that can be acted on.

Be sharp and ruthless about defining the MVP.

Lean Startup defines MVP (Minimum Viable Product) as “that version of the product that allows the team to collect the maximum amount of validated learning from customers”.

(We think) we know the problem. We don’t know for certain the solution. We have only a vision and point-of-view on what it could be. We will only know for certain that we have a viable solution when customers tell us so because they use it. So identify the top customer problems we’re trying to solve, the underlying assumptions in our proposed solution, and what we really need to learn from our customers. Then formulate testable hypotheses and use them to define our MVP.

Make validated learning the measure

In the war of SDLCs, I’m neither a blanket waterfall basher nor a true believer in agile. But having done a number of waterfall projects, I’ve observed that waterfall is typically managed by what I call “management by date”, or, more often than not, by make-believe date.

As human beings, we like certainty. A date is certain. So setting a date is something that we feel can be measured, in part because a date feels real, it gives us a target, and in part probably because over decades we’ve become so accustomed to using date-driven project management to drive our product development efforts. The problem becomes that this gets us into the classic scope-time-budget headache, which means we’re now using those elements as the measure of our progress.

The thing is, scope, time and budget mean absolutely nothing to the customer. What really matters is whether customers find value in the solution we are trying to provide them. Traditional product development and project management practices don’t allow us to measure that until product launch, by which time it may be too late.

So we need to make learning the primary goal, not simply hitting a release date, which is really a check-the-box exercise and means nothing. Nothing beats direct customer feedback. We don’t know what the solution is until customers can get their hands on it. So instead of working like crazy to hit a release date, work like crazy to get customer validation. That allows us to validate our solution (MVP) and pivot as necessary.

Focus always, always on delivering a great user experience

Better to have less functionality that delivers a resonating experience than more that compromises usability. A poor UX directly impacts the value proposition of our solution. We need look no further than Apple’s stumble on the iPhone 5 Maps app. (Ironic.)

Continuous deployment applies not just to agile delivery, but also the roadmap

Over four years ago, Saeed Khan posted a nice piece on roadmaps where he said:

A roadmap is a planned future, laid out in broad strokes — i.e. planned or proposed product releases, listing high level functionality or release themes, laid out in rough timeframes — usually the target calendar or fiscal quarter — for a period usually extending for 2 or 3 significant feature releases into the future.

The roadmap is just that: a high-level map to achieve a vision. Not a calendar of arbitrary dates to hit. Too many roadmaps seem to suffer from the same date-driven project management approach.

For most established software products, I typically advocate having at least a 12-month roadmap that communicates the direction to be taken to achieve the vision and big business goals. It identifies targeted epics to achieve that vision. The vision is boiled down to a more tangible 3-month roadmap. That’s the stuff we want to get done in the next 3 months and what the agile teams need to work on.

Create an accountable person or body that actively looks at the roadmap on a monthly and quarterly basis. On a monthly basis, this body helps the agile Product Owner(s) prioritize the backlog against the 3-month roadmap. On a quarterly basis, this body evaluates overall progress against the 12-month roadmap. As such, 12 months is a rolling period, not an annual calendar of unsubstantiated promises of delivery.

What has your experience been implementing agile in your organization? What principles does your organization follow in executing an agile process?