
RIP PRDs. Long Live “Agile Conversations”

I don’t write PRDs.

Product Requirements Documents.

They take too long to write and are typically outdated by the time they’re “finished”, which, it turns out, they never are.

They’re never finished because the market changes. Customers’ needs evolve. Competitors change conditions. New technology changes everything.

Worse, they typically end up creating even more documentation.

Years ago, I worked in a company that followed a pretty typical waterfall process. Product Managers produced PRDs. (MS Word docs.) Big, multi-page docs that contained sections like “Product Overview”, “Business Objectives”, “Features”, “Personas”, “Use Cases and User Scenarios”, and a detailed itemized list of “Functional Requirements”.

Then Business Analysts translated them to “Functional Specifications”. (A bigger MS Word doc.) And then a System Designer translated those to “Technical Design Specifications”. (Yet another even bigger MS Word doc.)

Crikey, we weren’t building an aircraft carrier! We were just trying to build a software app.

Now imagine something had to change in the PRD. (Which it always did, of course.) That change had to be propagated down to each document.

More writing…

But here’s the funny thing: these documents never seemed to reduce the need for the humans involved in the project to talk to each other.

Each of these documents was typically followed by meetings. Meetings to discuss the very documents that were produced. To go over them page by page, line by line.

Going over such a large document takes time. So we’d schedule 3-hour, 4-hour, 6-hour meetings. (No kidding.)

But no one likes long meetings. So after a minor outcry, these meetings were reduced to 1.5 to 2 hours tops. The result was that the poor Product Manager (or Business Analyst or System Designer) was now pressured to run through the same big document in less time.

Of course, that didn’t mean people had fewer questions!

So inevitably we’d run out of time and have to schedule a follow-on meeting to get through the rest of the document and answer questions we weren’t able to get to.

It’s not easy to coordinate the schedules of so many busy people…

The net result: the original 3- or 4-hour meeting was now spread across multiple sessions that took place over several days or even weeks, and we ended up spending more time in total.

More time in writing and discussing documentation.

This could take weeks. Often, months.

And don’t get me started on the “change request” process!

And with so much time spent in writing and discussing the documentation, you’d think the output — the actual product that was delivered — was built as expected.

Nope.

It was amazing how often, despite all the effort invested in documentation and meetings, some particular piece of functionality wasn’t delivered as originally envisioned.

So often it would be due to some disconnect between the various documents everyone was trying so hard to keep synchronized.

I swear, it would make me want to rip my hair out (of which I seem to have less with age!).

We spent so much time producing, coordinating, verifying, validating, and CYA-ing the documentation that we didn’t do the thing that was most important:

Delivering value to customers as quickly as possible.

Our time is better spent interacting with customers, testing solutions, delivering product to them, getting their feedback, and acting on that feedback as quickly as possible.

After going through this one too many times, I vowed never to do it again. That’s when I began experimenting not only with agile software development practices, but customer development and other lean strategies to test, validate and iterate on the business planning aspects of delivering software and digital products and services.

Over time, I’ve developed a more lightweight, human-friendly, and — yes — agile approach to testing, building and launching products. One that has a proclivity toward action and conversations, is fueled by customer centricity, and is predicated on delivering business value and speed-to-market.

To be honest, this approach isn’t particularly revolutionary, and I certainly can’t lay claim to having invented any aspect of it. It’s all pretty much what’s covered in the manifesto for agile software development, and the good news is these practices are being increasingly adopted in different companies and industries for different types of software products and digital initiatives.

And yet, every week I get an email like this:

“As you preach, long MRD/PRD’s make no sense. However, there is some need to provide our business analyst with some form of requirement. What do you do in these cases? I have a Feature Requirements doc I created, but I sometimes feel it is overkill. What would you recommend?”

People are increasingly aware of agile and lean practices. The agile manifesto is in the public domain. There are books on Lean Startup and customer development practices. But what this email (and countless others I receive) shows is that it’s one thing to be aware of a thing, and another to actually put it into practice.

In fairness, that’s understandable. For one, people have different learning curves, in aptitude and even willingness to change. Old habits can die hard. There are also different environments, organizational structures and cultures to consider. Innovating in a startup is different than innovating in a global enterprise company. And then there’s special snowflake syndrome.

I’m on a mission to help as many of these product managers as I can. In my reply to this product manager, I shared the specific practices, activities and tools my teams and I use to replace the overblown PRD process. And I want to share them with you.

Here’s my reply (almost entirely copy-pasted here):

  • I no longer write MRDs, PRDs, or any sort of traditional functional spec. Haven’t written one in years. I don’t have my teams write them either. They’re a total waste of time.
  • If we have time, budget and bandwidth, we’ll always get a Designer to create the screens before getting it to Engineering. Doing otherwise is a waste of Engineering’s time. Only caveat is I keep my Engineering Head / Chief Architect informed and involved.
  • Since the Designer is the initial recipient of the stories, we can write them at a fairly high level. We don’t get hung up too much on granular acceptance criteria — the use case, user goal or job story is much more important at this stage.
  • As such, at times we’ve provided just a 1-pager with bullets. Because design is iterative, we lean more heavily on interactive ongoing conversations than documentation.
  • If we don’t have time, budget or bandwidth for a Designer, PM just hacks the screens — Balsamiq, ppt, whatever. Not ideal, but sometimes you just have to make do. I’ve literally sent a photo of a whiteboard doodle, and written the story around it.
  • As much as possible, for major new features or flows, we will do customer validation. Yep: phone interviews with screen shares. 5 may be good enough.
  • We don’t have a Business Analyst, so PM writes the stories directly, including me. It’s not hard, and it removes a middleman. Not saying a BA/BSA is never needed — just saying in our case we haven’t had the need for one, and have managed fine.
  • When ready to provide the stories to Engineering, we do write more granular stories. Granularity is determined by the level of detail in the screens. Let’s say a designer mocked up every detail — every click, button placement, font, color, and even copy — then it’s just easier to attach the mockup as supporting documentation and point to it in the acceptance criteria. If the design was more high-level, like a ppt hack or my silly whiteboard doodle, naturally I need to provide more specificity. (See the example after this list.)
  • We use Aha.io to capture ideas, convert them to features, write stories, attach supporting artifacts, do prioritization, and map out an executable roadmap. It has some super cool features:
    • It will spit out a req doc from the stories you write. This is helpful if you’re using a 3rd party dev firm and need to provide them a written doc during the initial planning stages. So we no longer need to use MS Word.
    • It has an awesome Jira integration feature: I literally click a button and — boom — everything is automatically placed in the Jira backlog.
    • It allows me to publish out status on our roadmap execution to senior execs. This is extremely helpful, as it gets me away from PowerPoint and Excel crap.
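
To make the granularity point concrete, here’s a hypothetical example (the feature and details are made up, not pulled from a real backlog). A high-level job story handed to the Designer might read:

“When I finish building a payment form, I want to preview exactly what my customers will see, so I can catch mistakes before the form goes live.”

Once the mockups are done and we’re ready for Engineering, the story gets more granular, with the mockup doing the heavy lifting:

“As a form owner, I can preview my payment form before publishing. Acceptance criteria: the Preview button appears as shown in the attached mockup; the preview renders the form exactly as customers will see it; publishing is blocked until the form passes validation.”

Same feature, two levels of granularity, each matched to its audience.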

Ultimately what matters most is joint understanding between Product Management and Engineering (and Design, if you have it). If there’s a good relationship, all sides can come to an agreement on how the reqs should be delivered. And remember: regardless of how the reqs are documented, ongoing conversations between PM and Engineering are the key.


Are you still writing PRDs/MRDs? Or have you moved on to more effective processes? Share your experience in the comments below.

An MVP Is Not The Smallest Collection Of Features You Can Deliver

Source: Spotify

There’s a lot of discussion and confusion about what is and isn’t a minimum viable product (MVP).

Worse, many execs have latched on to the term without really understanding what truly constitutes an MVP — many use it as a buzzword, a synonym for a completed version 1.0 ready to be sold to all customers.

Buzzwords are meaningless. They represent lazy thinking. And using “MVP” to mean “first market launch” or “first customer ship” means you’re back to the old waterfall, traditional project-driven software development, sales-focused approach. If that’s your approach, fine. Just don’t call what you’re delivering an MVP.

On the flip side, lots of folks in the enterprise world, including in product management, over-think the term. It gets lost in the nuances of market maturity and in a long entrenchment in the world of release dates and feature-based requirements thinking.

Many folks think of MVP as simply the smallest collection of features to deliver to customers. Wrong. It’s not.

The problem with that approach is it assumes we know ahead of time exactly what will satisfy customers. Even if we’ve served them for years, odds are when it comes to a new product or feature, we don’t.

Now, the challenge with the concept of a minimum viable product is that it constitutes an entirely different way of thinking about product development.

It’s not about product delivery actually — in other words, it’s not about delivering product for the sake of delivering it or to hit a deadline.

An MVP is about validated learning.

As such, it puts customers’ problems squarely at the center, not our solution.

Reality check: Customers don’t care about your solution. They care about their problems. Your solution, while interesting, is irrelevant.

So if we’re going to use the term “MVP”, it’s important to understand what it really means.

Fortunately, all it takes to do that is to go back to the definition.


Download The Handy Primer “What Is An MVP?” >>


Minimum Viable Product (MVP) is a term coined by Eric Ries as part of his Lean Startup methodology, which lays out a framework for pursuing a startup in particular, and product innovation more generally. This means we need to understand the methodology of Lean Startup to have the right context for using terms like “MVP”. (Just like we shouldn’t use “product backlog” from Agile as a synonym for “dumping ground for all possible feature ideas”.)

Eric lays out a definition of an MVP:

“The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Eric goes on to explain exactly what he means (emphasis mine):

MVP, despite the name, is not about creating minimal products… In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.

Second, the definition’s use of the words maximum and minimum means an MVP is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.

Let’s break this down.

1. An MVP is a product. This means it must be something delivered to customers that they can use.

There’s a lot that’s been written about creating landing pages, mockups, prototypes, doing smoke tests, etc., and considering them as forms of MVPs. While these are undoubtedly worthwhile, and certainly “lean”, efforts to gain valuable learnings, they are not products. Read Ramli John’s excellent post on “A Landing Page Is NOT A Minimum Viable Product”.

A product must attempt to deliver real value to customers. So a minimum viable product is an attempt — an experiment — to deliver real value to customers.

Which leads us to…

2. An MVP is viable. This means it must try to tangibly solve real world and urgent problems faced by your target customers. An MVP must attempt to deliver value.

So it’s not about figuring out the smallest collection of features. It’s about making sure we’ve understood our customers’ top problems, and figuring out how to deliver a solution to those problems in a way that early customers are willing to “pay” for. (“Pay” in quotes as it depends on your business model.)

If we can’t viably solve early customers’ primary problems, everything else is moot. That is why an MVP is about validated learning.

3. An MVP is the minimum version of your product vision. A few years ago, I had to build an online form builder app that would allow customers to create online payment forms without the need to write any HTML or worry about connecting to a payment gateway. Before having our developers write a single line of code to build the product, we first offered customers the capability as a service: we would get their specs, and then manually build and deliver each online payment form one-by-one, customer-by-customer. Customers would pay us for this service.

This “concierge” type service was our MVP version of our product vision. Of course, it wasn’t scalable. But we learned a heck of a lot: the most common types of payment forms customers wanted, what was most important to them in a form, how frequently they wanted to make changes, their reporting needs, and how they perceived the value of the service.

We parlayed these learnings into developing the software app itself — which, by the way, we delivered as an MVP to early customers to whom we had pre-sold the software product. (Yes, we delivered two different types of MVPs!)

Whether you take a “concierge” approach or your MVP is actual code, it most definitely does NOT mean it’s a half-baked or buggy product. (Remember viable from above?)

It DOES mean critically thinking through the absolute necessary features your product will need day 1 to solve your early customers’ top problems, focusing on delivering those first, and putting everything else on the backlog for the time being. It also means being very deliberate about finding those “earlyvangelists” that Steve Blank always talks about.

Ultimately, the key here is “maximum amount of validated learning”. This means being systematic about identifying your riskiest assumptions, formulating testable, falsifiable hypotheses around them, and using an MVP — a minimum viable product version of your product vision — to prove or disprove your hypotheses.
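
Here’s a made-up illustration of what that looks like in practice, using the payment forms story from above. Riskiest assumption: small business owners will pay for payment forms they don’t have to build themselves. Falsifiable hypothesis: “If we offer to build payment forms as a paid service to 20 prospects, at least 5 will pay for it within two weeks.” If fewer than 5 pay, the hypothesis is disproven, and we’ve learned something fundamental about the problem or our solution before writing a single line of code.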

Now, validated learning can certainly be accomplished via a landing page, mockup, wireframes, etc. And it may make sense to do these things. Super. But don’t call them MVPs, because while they may deliver value to you and your product idea, they’re not delivering actual value to the customer.

At the same time, the traditional product management exercise of identifying all the features of a product, force ranking them, and then drawing a line through the list to identify the smallest collection to be delivered by a given timeframe is not an MVP. Why? Because this approach is not predicated on maximizing validated learning. If you’re going to pursue this approach, go ahead and call it Release 1.0, Version 1.0, “Beta”, whatever. But don’t call it an MVP.

An MVP is about not just the solution we’re delivering, but also the approach. The key is maximizing validated learning.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!


Download The Handy Primer “What Is An MVP?” >>


Applying Lean Techniques in a Big Company: The “Hothouse”

Large companies are always trying to find ways to move at “startup speed” or “digital speed”. The challenge oftentimes is quite simply people: there are just too many of them to keep aligned in order to move swiftly. For any initiative, there are usually multiple stakeholders and affected parties. That means multiple opinions and feedback cycles.

It’s easy to say decision making should be centralized, but the reality is it’s much harder to execute in practice. Even a 1,000-person company has multiple departments, and bigger companies often have sub-departments. If I’m driving a new initiative that will materially impact the customer and/or the business, the fact of the matter is I need to actively coordinate with many of these groups throughout the process: marketing, operations, call centers, brand, legal, IT/engineering, design, etc. That means not only coordinating with those department heads (usually VPs), but also with their lieutenants (usually Directors), who are responsible for execution within their domains and control key resources, and thus have a material influence over the decisions of the department heads.

In addition, large companies are often crippled by their own processes. Stage-Gate type implementations can be particularly notorious for slowing things down with the plethora of paperwork and approvals they tend to involve.

All of this means tons of emails, socialization meetings, documentation, and needless deck work. All of which is a form of waste, because it prevents true forward progress in terms of driving decision making, developing solutions, and getting them to market.

Initiatives involving UI are particularly susceptible to this sort of corporate bureaucracy for the simple reason that UI is visual and therefore easy to react to, and everyone feels entitled to opine on the user experience. Once, one of my product managers spent a month trying to get feedback and approvals from a handful of senior stakeholders on the UX direction for his initiative. A month! Why did it take so long? For the simple and very real reason that it was difficult to get all these senior leaders in the same room at the same time. What a waste of time!

So how to solve for this? Several years ago, my colleagues and I faced this exact challenge. While an overhaul of the entire SDLC was needed, that takes time in a large organization. What we needed was something that could accelerate decision making, involve both senior stakeholders and the project team, yet be implemented quickly. That’s when we hit upon our true insight: What we needed was to apply lean thinking to the process of gaining consensus on key business decisions.

And that’s how we adopted an agile innovation process that we called a “Hothouse.” A Hothouse combines the Build-Measure-Learn loop from Lean Startup with design thinking principles and agile development into a 2- to 3-day workshop in which senior business leaders work with product development teams through a series of iterative sprints to solve key business problems.

That’s a mouthful. Let’s break it down.

The Hothouse takes place typically over 2 or 3 days. One to three small Sprint teams are assembled to work on specific business problems throughout those 2-3 days in a series of Creative Sprints that are typically about 3 hours each. (4 hours max, 2.5 hours minimum.) Between each Creative Sprint is a Design Review in which the teams present the deliverables from their last sprint to senior leaders, who provide constructive, actionable feedback to the teams. The teams take this feedback into the next Creative Sprint.
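
To make the cadence concrete, here’s what a hypothetical 2-day Hothouse agenda might look like (the exact timings will vary by organization and scope):

  • Day 1, morning: kickoff, framing of the business problems, and Creative Sprint 1 (roughly 3 hours).
  • Day 1, afternoon: Design Review 1 with senior leaders, then Creative Sprint 2 incorporating their feedback.
  • Day 2, morning: Design Review 2, then Creative Sprint 3.
  • Day 2, afternoon: final Design Review, acceptance of the outputs, and agreement on post-Hothouse next steps.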

This iterative workflow of Creative Sprints and Design Reviews follows the Build-Measure-Learn meta pattern from Lean Startup:

Hothouse Build-Measure-Learn

During the Creative Sprints, teams pursue the work using design thinking and agile principles, like reframing the business challenge, collaborative working between business and developers, ideation, user-centric thinking, using synthesis for problem solving, rapid prototyping, iterative and continuous delivery, face-to-face conversation as the primary means of communication, and team retrospectives at the start of each sprint.

The Hothouse is used to accelerate solution development for a small handful of key business problems. So job #1 is to determine the specific business problems you want to solve in the Hothouse. The fewer, the better, as a narrow scope allows for more focused, efficient and faster solution development. For each business problem, teams bring in supporting material, such as existing customer research, current state user experience, business requirements, prototypes, architectural maps, etc. as inputs into the Hothouse. The expected outputs from the Hothouse depend on the business problems being addressed and the specific goals of the Hothouse, but can take the form of a refined and approved prototype, prioritized business requirements or stories, system impacts assessment, high-level delivery estimates, and even a marketing communication plan.

Hothouse process

At the end of the Hothouse, the accepted outputs become the foundation for further development post-Hothouse.

I’ve been part of numerous Hothouses, both as a participant and facilitator, and I’ve seen Hothouses applied to solve business challenges of varying scope and scale. For example:

  • Re-design of a web page or landing page.
  • Design of a user flow through an application.
  • Development of specific online capabilities, such as online registration and customer onboarding.
  • A complex re-platforming project involving migration from an old system to a new one with considerations for customer and business impacts.
  • An acquisition between F1000 companies.

The benefits to a large organization are manifold:

  • Accelerates decision making. What typically takes weeks or months is completed in days.
  • Senior leadership involvement means immediate feedback and direction for the project team.
  • Ensures alignment across all stakeholders and teams. The folks most directly impacted — senior leadership and the project delivery team — are fully represented. By the end of the Hothouse, everyone is on the same page on everything: the business problems and goals, proposed solutions, high-level system impacts, potential delivery trade-offs, priorities, and next steps.
  • This alignment serves as a much-needed baseline for the project post-Hothouse.
  • The bottom line is faster product definition and solution development, which speeds up time-to-market.

A Hothouse can help you generate innovative solutions to your organization’s current problems, faster and cheaper. More details here. If you’re interested in learning more about agile innovation processes like the Hothouse, or how to implement one at your organization, reach out to me via Twitter or LinkedIn.

Disclaimer: I didn’t come up with the term Hothouse. I don’t know who did. But it’s a name we used internally, and it stuck. I think the original methodology comes from the UK, but I’m not sure. If you happen to know if the name is trademarked, please let me know and I’ll be happy to add the credit to this post.

Why It’s Better To Be Smaller When Implementing Agile In A Large Company

Having done many waterfall projects, I was recently part of an effort to move a large organization to an agile software delivery process after years of following waterfall. I’ll be blunt: it was downright painful. That said, I’d still pick agile over the multi-staged, paper-intensive, meeting-heavy, PMO-driven waterfall process I encountered when I joined the organization.

Although the shift was painful, it was a terrific educational experience. Based on lessons learned, we adopted certain principles to guide our approach to implementing agile in the organization.

Dream big. Think smaller.

This means having a vision for what the solution will look like and the benefits it will provide customers, but then boiling it down to specifics to be able to execute. For example, at one of my former gigs, we had identified the need to make improvements to our online payments process, and captured over 20 different enhancements on a single slide under the title of “Payment Enhancements”. (Yes, in very tiny font, like 8-point.) Those enhancements were beyond simple things like improving copy or the layout of elements. Each enhancement would have involved material impacts to back-end processes. As such, “Payment Enhancements” is not an epic, or at least, it’s a super big and super nebulous one that cannot be measured. Rather, I argued that each bullet on that 1-pager could be considered an epic in and of itself that could be placed on the roadmap and would need to be further broken down into stories for execution purposes.

Thinking smaller also means considering launching the capability to a smaller subset of customers. Even when pursuing an enhancement to an existing product, it’s important to ask whether the enhancement will truly benefit all customers using your product, or whether it needs to be made available to all customers on day 1. The benefits of identifying an early adopter segment: (1) you get code out faster, (2) you lower customer impact, and (3) you get customer feedback sooner and can act on it.

Be sharp and ruthless about defining the MVP.

Lean Startup defines MVP (Minimum Viable Product) as “that version of the product that allows the team to collect the maximum amount of validated learning from customers”.

(We think) we know the problem. We don’t know the solution for certain. We have only a vision and a point of view on what it could be. We will only know for certain that we have a viable solution when customers tell us so by using it. So identify the top customer problems we’re trying to solve, the underlying assumptions in our proposed solution, and what we really need to learn from our customers. Then formulate testable hypotheses and use them to define our MVP.
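
For example, here’s an illustrative sketch (not from a real project):

  • Top customer problem: reconciling payments takes hours every week.
  • Riskiest assumption: customers will trust automated reconciliation over their spreadsheets.
  • Testable hypothesis: if we deliver automated reconciliation to 10 early adopters, at least 7 will use it weekly within a month.

The MVP is then scoped to exactly what’s needed to run that test, and nothing more.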

Make validated learning the measure

In the war of SDLCs, I’m neither a blanket waterfall basher nor an agile true believer. But having done a number of waterfall projects, I’ve observed that waterfall is typically managed by what I call “management by date”, or, more often than not, by make-believe date.

As human beings, we like certainty. A date is certain. Setting a date gives us something we feel can be measured: in part because a date feels real and gives us a target, and in part because, over decades, we’ve become so accustomed to using date-driven project management to drive our product development efforts. The problem is that this gets us into the classic scope-time-budget headache, which means we’re now using those elements as the measure of our progress.

The thing is, scope, time and budget mean absolutely nothing to the customer. What really matters is whether customers find value in the solution we are trying to provide them. Traditional product development and project management practices don’t allow us to measure that until product launch, by which time it may be too late.

So we need to make learning the primary goal, not simply hitting a release date, which is really a check-the-box exercise and means nothing. Nothing beats direct customer feedback. We don’t know what the solution is until customers can get their hands on it. So instead of working like crazy to hit a release date, work like crazy to get customer validation. That allows us to validate our solution (MVP) and pivot as necessary.

Focus always, always on delivering a great user experience

Better to have less functionality that delivers a resonating experience than more that compromises usability. A poor UX directly impacts the value proposition of our solution. We need look no further than Apple’s stumble on the iPhone 5 Maps app. (Ironic.)

Continuous deployment applies not just to agile delivery, but also the roadmap

Over four years ago, Saeed Khan posted a nice piece on roadmaps where he said:

A roadmap is a planned future, laid out in broad strokes — i.e. planned or proposed product releases, listing high level functionality or release themes, laid out in rough timeframes — usually the target calendar or fiscal quarter — for a period usually extending for 2 or 3 significant feature releases into the future.

The roadmap is just that: a high-level map to achieve a vision. Not a calendar of arbitrary dates to hit. Too many roadmaps seem to suffer from the same date-driven project management approach.

For most established software products, I typically advocate having at least a 12-month roadmap that communicates the direction to be taken to achieve the vision and big business goals. It identifies targeted epics to achieve that vision. The vision is boiled down to a more tangible 3-month roadmap. That’s the stuff we want to get done in the next 3 months and what the agile teams need to work on.

Create an accountable person or body that actively looks at the roadmap on a monthly and quarterly basis. On a monthly basis, this body helps the agile Product Owner(s) prioritize the backlog against the 3-month roadmap. On a quarterly basis, this body evaluates overall progress against the 12-month roadmap. As such, 12 months is a rolling period, not an annual calendar of unsubstantiated promises of delivery.
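
As an illustration (the epics and numbers here are hypothetical): the 12-month roadmap might show epics like “Self-serve onboarding” targeted for the first half of the year and “Payments automation” for the second, each tied to a business goal. The 3-month roadmap breaks the current quarter’s epics into prioritized stories for the agile teams. Each month, the accountable body helps the Product Owner re-prioritize the backlog against that 3-month view; each quarter, it rolls the 12-month roadmap forward and adjusts the epics based on what’s been learned.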

What has your experience been implementing agile in your organization? What principles does your organization follow in executing an agile process?