Category Archives: Product Development

How To Define An MVP: A Case Study

In my last post, I talked about how a minimum viable product (MVP) is not the smallest collection of features to be delivered. An MVP is an in-market experiment of a product idea: delivering a real product to actual customers to get their feedback.

You can test an MVP whether your idea is a brand-new product or a new feature for an existing product.

And even if your product is software, your MVP doesn’t necessarily have to be software too.

Folks may be familiar with how Groupon started as a WordPress blog called “The Daily Groupon”, on which the team posted daily discounts, restaurant gift certificates, concert vouchers, movie tickets, and other deals in the Chicago area.

Food On The Table, a family meal-planning and grocery-shopping site eventually acquired by the Food Network, started by working with customers individually: the team created meal plans and shopping lists for them on spreadsheets and over email, and then bought and delivered the food items themselves.

So how do you go about defining an MVP for your product idea?

It starts with a hypothesis about which features or capabilities you believe must be delivered to your target customer in order to provide them value.

This is predicated on having done the hard upfront work of validating your customer’s problem (that it exists, and that it’s urgent and pervasive), and perhaps even having tested a prototype of your solution vision.

If you feel you have a good enough understanding of your customer’s problem (pain point, job to be done, etc.), use that as a basis to identify what you believe are the must-have features for your MVP that are aligned with your solution vision.

Then test that MVP with real customers. Evaluate your results. Rinse and repeat.

To make this more tangible, here’s an example from my own experience.

For a product idea we had, we wanted to test our understanding of our customers’ top problems and get directional feedback on our solution approach. Directional feedback meant identifying the “right” handful of features to build first for early customers.

Based on some early customer conversations and market research, we developed a view of the problem domain. We sketched out our product vision on the Product Canvas™, which allowed us to break the problem domain down into discrete problems and formulate testable, falsifiable hypotheses around what we believed to be the top problems our solution absolutely had to solve first.

We built a clickable mockup defined by the key elements of our solution captured in our Product Canvas exercise. To keep things simple, we built a screen for each discrete problem to represent our solution vision — real HTML and CSS, in color, no lorem ipsum, with clickable interactions to represent the primary workflow through the screens.

We didn’t build out every interaction — just the main ones. We formulated a testable falsifiable hypothesis around the ability of each screen to solve a specific problem.
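
To make “testable and falsifiable” concrete, here is a minimal sketch, in Python with hypothetical field names, of how a screen-level hypothesis might be recorded: a falsifiable statement plus a pass/fail criterion agreed on before any interviews begin.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreenHypothesis:
    """One falsifiable hypothesis tied to one mockup screen."""
    screen: str               # the mockup screen under test
    problem: str              # the discrete customer problem it targets
    hypothesis: str           # the falsifiable statement of belief
    pass_criterion: str       # what observed behavior counts as validation
    validated: Optional[bool] = None  # None until tested in interviews

# Hypothetical example of one screen-level hypothesis:
h = ScreenHypothesis(
    screen="Status Dashboard",
    problem="Users can't see the state of their work at a glance",
    hypothesis="Interviewees will identify their top outstanding item "
               "from this screen without prompting",
    pass_criterion="8 of 10 interviewees succeed within 10 seconds",
)
```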

We then set up a number of customer interviews to test our problem hypotheses. During these customer conversations, we listened carefully to fully understand our customers’ world views and their current work flows, even noting the emotions in their voice and their body language (during in-person meetings, when we could do them) as they discussed their challenges and reacted to our screens.

We were deliberate and meticulous about documenting the results.

It turned out that while we had identified a viable problem domain, our view of what early customers considered their chief problems was invalidated. We also learned that while our solution approach was generally in the right direction, there were features we had not envisioned that early customers considered must-haves in the initial delivery.

As a massive bonus, we were able to garner a handful of very early customers who were willing to co-test the solution with us, further validating that we had hit a real pain point and were directionally correct in our solution approach.

As a primary outcome of this work we were able to understand our customers’ problem at a granular level, which helped prioritize the initial set of features to build. That drove the definition of the minimum viable product version of our solution.

And that’s what we did. We built just those features, and nothing else, and delivered it to those handful of early customers.

In fact, our first MVP wasn’t software. It was more of a concierge-type service, sort of like what Food On The Table did — we “manually” delivered the service to each customer individually.

We learned a ton of really useful stuff: what was really important to the customer, which features of the service they used most often, real insights into their workflow and how our solution could improve it, and — crucially — what they were willing to pay for.

We used these learnings to then define a software MVP, and deliver it to early committed customers. The learnings from our “concierge” MVP experiment helped boost our confidence in defining the requirements for our software MVP. In other words, it was much less of a guess than it otherwise would have been.

We didn’t really bother with calling the software MVP a “release 1.0” or “version 1.0”, because that was irrelevant. We just focused on testing the solution until we received customer validation that it was truly providing value.

That gave us the confidence to know our product idea was “good to go” to scale up, put some real sales and marketing muscle behind it, and sell to more customers.

There’s no one way necessarily to approach an MVP. This is just one example of an approach. As Eric Ries states, defining an MVP is not formulaic: “It requires judgment to figure out, for any given context, what MVP makes sense.” Hopefully, this example gives you a template to define and test your own minimum viable product for your next great product idea.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!

 

An MVP Is Not The Smallest Collection Of Features You Can Deliver

[Image source: Spotify]

There’s a lot of discussion and confusion about what is and isn’t a minimum viable product (MVP).

Worse, many execs have latched on to the term without really understanding what truly constitutes an MVP — many use it as a buzzword, a synonym for a completed version 1.0 ready to be sold to all customers.

Buzzwords are meaningless. They represent lazy thinking. And using “MVP” to mean “first market launch” or “first customer ship” means you’re back to the old waterfall, traditional project-driven software development, sales-focused approach. If that’s your approach, fine. Just don’t call what you’re delivering an MVP.

On the flip side, lots of folks in the enterprise world, including in product management, over-think the term. It gets lost in clever nuances of market maturity and in a long entrenchment in the world of release dates and feature-based requirements thinking.

Many folks think of MVP as simply the smallest collection of features to deliver to customers. Wrong. It’s not.

The problem with that approach is it assumes we know ahead of time exactly what will satisfy customers. Even if we’ve served them for years, odds are when it comes to a new product or feature, we don’t.

Now, the challenge with the concept of a minimum viable product is it constitutes an entirely different way of thinking about our approach to product development.

It’s not about product delivery actually — in other words, it’s not about delivering product for the sake of delivering it or to hit a deadline.

An MVP is about validated learning.

As such, it puts customers’ problems squarely at the center, not our solution.

Reality check: Customers don’t care about your solution. They care about their problems. Your solution, while interesting, is irrelevant.

So if we’re going to use the term “MVP”, it’s important to understand what it really means.

Fortunately, all it takes to do that is to go back to the definition.


Download The Handy Primer “What Is An MVP?” >>


Minimum Viable Product (MVP) is a term popularized by Eric Ries as part of his Lean Startup methodology, which lays out a framework for pursuing a startup in particular, and product innovation more generally. This means we need to understand the Lean Startup methodology to have the right context for using terms like “MVP”. (Just like we shouldn’t use “product backlog” from Agile as a synonym for “dumping ground for all possible feature ideas”.)

Eric lays out a definition for what is an MVP:

“The minimum viable product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

Eric goes on to explain exactly what he means (emphasis mine):

MVP, despite the name, is not about creating minimal products… In fact, MVP is quite annoying, because it imposes extra overhead. We have to manage to learn something from our first product iteration. In a lot of cases, this requires a lot of energy invested in talking to customers or metrics and analytics.

Second, the definition’s use of the words maximum and minimum means an MVP is decidedly not formulaic. It requires judgment to figure out, for any given context, what MVP makes sense.

Let’s break this down.

1. An MVP is a product. This means it must be something delivered to customers that they can use.

There’s a lot that’s been written about creating landing pages, mockups, prototypes, doing smoke tests, etc., and considering them forms of MVPs. While these are undoubtedly worthwhile, and certainly “lean”, efforts to gain valuable learnings, they are not products. Read Ramli John’s excellent post, “A Landing Page Is NOT A Minimum Viable Product”.

A product must attempt to deliver real value to customers. So a minimum viable product is an attempt — an experiment — to deliver real value to customers.

Which leads us to…

2. An MVP is viable. This means it must try to tangibly solve real-world, urgent problems faced by your target customers. An MVP must attempt to deliver value.

So it’s not about figuring out the smallest collection of features. It’s about making sure we’ve understood our customers’ top problems, and figuring out how to deliver a solution to those problems in a way that early customers are willing to “pay” for. (“Pay” in quotes as it depends on your business model.)

If we can’t viably solve early customers’ primary problems, everything else is moot. That is why an MVP is about validated learning.

3. An MVP is the minimum version of your product vision. A few years ago, I had to build an online form builder app that would allow customers to create online payment forms without the need to write any HTML or worry about connecting to a payment gateway. Before having our developers write a single line of code to build the product, we first offered customers the capability as a service: we would get their specs, and then manually build and deliver each online payment form one-by-one, customer-by-customer. Customers would pay us for this service.

This “concierge” type service was the MVP version of our product vision. Of course, it wasn’t scalable. But we learned a heck of a lot: the most common types of payment forms customers wanted, what mattered most to them in a form, how frequently they wanted to make changes, their reporting needs, and how they perceived the value of the service.

We parlayed these learnings into developing the software app itself — which, by the way, we delivered as an MVP to early customers to whom we had pre-sold the software product. (Yes, we delivered two different types of MVPs!)

Whether you take a “concierge” approach or your MVP is actual code, it most definitely does NOT mean it’s a half-baked or buggy product. (Remember viable from above?)

It DOES mean critically thinking through the absolutely necessary features your product will need on day 1 to solve your early customers’ top problems, focusing on delivering those first, and putting everything else on the backlog for the time being. It also means being very deliberate about finding those “earlyvangelists” Steve Blank always talks about.

Ultimately, the key here is “maximum amount of validated learning”. This means being systematic about identifying your riskiest assumptions, formulating testable falsifiable hypotheses around these, and using an MVP — a minimum viable product version of your product vision — to prove or disprove your hypotheses.
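
For illustration, one way to be systematic about “riskiest assumptions” (a sketch, not a prescribed formula; the beliefs and scores below are hypothetical) is to score each assumption by impact and uncertainty, then test the riskiest first:

```python
# Sketch: rank assumptions by risk (impact x uncertainty) so the MVP
# tests the riskiest beliefs first. All entries are hypothetical.
assumptions = [
    {"belief": "Customers will pay monthly for this", "impact": 5, "uncertainty": 4},
    {"belief": "Onboarding can be fully self-serve", "impact": 3, "uncertainty": 5},
    {"belief": "A third-party integration is needed on day 1", "impact": 4, "uncertainty": 2},
]

for a in assumptions:
    a["risk"] = a["impact"] * a["uncertainty"]

# The highest-risk assumptions become the first testable hypotheses.
for a in sorted(assumptions, key=lambda a: a["risk"], reverse=True):
    print(f"{a['risk']:>2}  {a['belief']}")
```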

Now, validated learning can certainly be accomplished via a landing page, mockup, wireframes, etc. And it may make sense to do these things. Super. But don’t call them MVPs, because while they may deliver value to you and your product idea, they’re not delivering actual value to the customer.

At the same time, the traditional product management exercise of identifying all the features of a product, force ranking them, and then drawing a line through the list to identify the smallest collection to be delivered by a given timeframe is not an MVP. Why? Because this approach is not predicated on maximizing validated learning. If you’re going to pursue this approach, go ahead and call it Release 1.0, Version 1.0, “Beta”, whatever. But don’t call it an MVP.

An MVP is about not just the solution we’re delivering, but also the approach. The key is maximizing validated learning.

I’ve created a handy primer on what is a minimum viable product. Download it below. I hope it helps you to become a pro at defining an MVP for your next great product idea!


Download The Handy Primer “What Is An MVP?” >>


Applying Lean Techniques in a Big Company: The “Hothouse”

Large companies are always trying to find ways to move at “startup speed” or “digital speed”. The challenge oftentimes is quite simply people: there are just too many of them to keep aligned in order to move swiftly. For any initiative, there are usually multiple stakeholders and affected parties, which means multiple opinions and feedback cycles.

It’s easy to say decision making should be centralized, but the reality is it’s much harder to execute in practice. Even a 1,000-person company has multiple departments, and bigger companies often have sub-departments. If I’m driving a new initiative that will materially impact the customer and/or the business, the fact of the matter is I need to actively coordinate with many of these groups throughout the process: marketing, operations, call centers, brand, legal, IT/engineering, design, etc. That means coordinating not only with the department heads (usually VPs), but also with their lieutenants (usually Directors), who are responsible for execution within their domains, control key resources, and thus have a material influence over the decisions of the department heads.

In addition, large companies are often crippled by their own processes. Stage-Gate type implementations can be particularly notorious for slowing things down with the plethora of paperwork and approvals they tend to involve.

All of this means tons of emails, socialization meetings, documentation, and needless deck work. All of which is a form of waste, because it prevents true forward progress in terms of driving decision making, developing solutions, and getting them to market.

Initiatives involving UI are particularly susceptible to this sort of corporate bureaucracy for the simple reason that UI is visual and therefore easy to react to, and everyone feels entitled to opine on the user experience. Once, one of my product managers spent a month trying to get feedback and approvals from a handful of senior stakeholders on the UX direction for his initiative. A month! Why did it take so long? For the simple and very real reason that it was difficult to get all these senior leaders in the same room at the same time. What a waste of time!

So how to solve for this? Several years ago, my colleagues and I faced this exact challenge. While an overhaul of the entire SDLC was needed, that takes time in a large organization. What we needed was something that could accelerate decision making, involve both senior stakeholders and the project team, yet be implemented quickly. That’s when we hit upon our true insight: What we needed was to apply lean thinking to the process of gaining consensus on key business decisions.

And that’s how we adopted an agile innovation process that we called a “Hothouse.” A Hothouse combines the Build-Measure-Learn loop from Lean Startup with design thinking principles and agile development into a 2- to 3-day workshop in which senior business leaders work with product development teams through a series of iterative sprints to solve key business problems.

That’s a mouthful. Let’s break it down.

The Hothouse typically takes place over 2 or 3 days. One to three small Sprint teams are assembled to work on specific business problems throughout those days in a series of Creative Sprints, typically about 3 hours each (4 hours max, 2.5 hours minimum). Between each Creative Sprint is a Design Review, in which the teams present the deliverables from their last sprint to senior leaders, who provide constructive, actionable feedback. The teams take this feedback into the next Creative Sprint.
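
For illustration, here is a sketch of what one day of that cadence might look like. The 45-minute review slot is an assumption; only the sprint durations above come from our actual practice.

```python
from datetime import datetime, timedelta

# Sketch of one Hothouse day: alternating ~3-hour Creative Sprints and
# Design Reviews. The 45-minute review length is a hypothetical value.
SPRINT = timedelta(hours=3)
REVIEW = timedelta(minutes=45)

def day_agenda(start, sprints=2):
    t = start
    for n in range(1, sprints + 1):
        yield t, f"Creative Sprint {n}"
        t += SPRINT
        yield t, f"Design Review {n}: teams present, senior leaders give feedback"
        t += REVIEW

for when, item in day_agenda(datetime(2024, 1, 15, 9, 0)):
    print(when.strftime("%H:%M"), item)
```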

This iterative workflow of Creative Sprints and Design Reviews follows the Build-Measure-Learn meta pattern from Lean Startup:

[Figure: the Hothouse Build-Measure-Learn loop]

During the Creative Sprints, teams pursue the work using design thinking and agile principles, like reframing the business challenge, collaborative working between business and developers, ideation, user-centric thinking, using synthesis for problem solving, rapid prototyping, iterative and continuous delivery, face-to-face conversation as the primary means of communication, and team retrospections at the start of each sprint.

The Hothouse is used to accelerate solution development for a small handful of key business problems. So job #1 is to determine the specific business problems you want to solve in the Hothouse. The fewer, the better, as a narrow scope allows for more focused, efficient and faster solution development. For each business problem, teams bring in supporting material, such as existing customer research, current state user experience, business requirements, prototypes, architectural maps, etc. as inputs into the Hothouse. The expected outputs from the Hothouse depend on the business problems being addressed and the specific goals of the Hothouse, but can take the form of a refined and approved prototype, prioritized business requirements or stories, system impacts assessment, high-level delivery estimates, and even a marketing communication plan.

[Figure: the Hothouse process]

At the end of the Hothouse, the accepted outputs become the foundation for further development post-Hothouse.

I’ve been part of numerous Hothouses, both as a participant and facilitator, and I’ve seen Hothouses applied to solve business challenges of varying scope and scale. For example:

  • Re-design of a web page or landing page.
  • Design of a user flow through an application.
  • Development of specific online capabilities, such as online registration and customer onboarding.
  • A complex re-platforming project involving migration from an old system to a new one with considerations for customer and business impacts.
  • An acquisition between two F1000 companies.

The benefits to a large organization are manifold:

  • Accelerates decision making. What typically takes weeks or months is completed in days.
  • Senior leadership involvement means immediate feedback and direction for the project team.
  • Ensures alignment across all stakeholders and teams. The folks most directly impacted — senior leadership and the project delivery team — are fully represented. By the end of the Hothouse, everyone is on the same page on everything: the business problems and goals, proposed solutions, high-level system impacts, potential delivery trade-offs, priorities, and next steps.
  • This alignment serves as a much-needed baseline for the project post-Hothouse.
  • The bottom line: faster product definition and solution development, which speeds up time-to-market.

A Hothouse can help you generate innovative solutions to your organization’s current problems, faster and cheaper. More details here. If you’re interested in learning more about agile innovation processes like the Hothouse, or how to implement one at your organization, reach out to me via Twitter or LinkedIn.

Disclaimer: I didn’t come up with the term Hothouse. I don’t know who did. But it’s a name we used internally, and it stuck. I think the original methodology comes from the UK, but I’m not sure. If you happen to know if the name is trademarked, please let me know and I’ll be happy to add the credit to this post.

Why It’s Better To Be Smaller When Implementing Agile In A Large Company

Having done many waterfall projects, I was recently part of an effort to move a large organization to an agile software delivery process after years of following waterfall. I’ll be blunt: it was downright painful. That said, I’d still pick agile over the multi-staged, paper-intensive, meeting-heavy, PMO-driven waterfall process I encountered when I joined the organization.

Although the shift was painful, it was a terrific educational experience. Based on lessons learned, we adopted certain principles to guide our approach to implementing agile in the organization.

Dream big. Think smaller.

This means having a vision for what the solution will look like and the benefits it will provide customers, but then boiling it down to specifics to be able to execute. For example, at one of my former gigs, we had identified the need to make improvements to our online payments process, and captured over 20 different enhancements on a single slide under the title of “Payment Enhancements”. (Yes, in very tiny font, like 8-point.) Those enhancements were beyond simple things like improving copy or the layout of elements. Each enhancement would have involved material impacts to back-end processes. As such, “Payment Enhancements” is not an epic, or at least, it’s a super big and super nebulous one that cannot be measured. Rather, I argued that each bullet on that 1-pager could be considered an epic in and of itself that could be placed on the roadmap and would need to be further broken down into stories for execution purposes.

Thinking smaller also means considering launching the capability to a smaller subset of customers. Even when pursuing an enhancement to an existing product, it’s important to ask whether the enhancement will truly benefit all customers using your product, and whether it really needs to be made available to all of them on day 1. The benefits of identifying an early adopter segment: (1) get code out faster, (2) lower customer impact, (3) get customer feedback sooner that can be acted on.
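
As a sketch of what launching to a smaller subset can look like in code (an illustration, not the process described above; the names and the 5% figure are hypothetical), a simple deterministic gate might hand-pick earlyvangelists and add a small, stable percentage of customers:

```python
import hashlib

# Hypothetical feature gate for an early-adopter launch. Hashing the
# customer id gives each customer a stable bucket, so the same customer
# always gets the same answer.
ROLLOUT_PERCENT = 5                        # assumed rollout slice
EARLY_ADOPTERS = {"cust-042", "cust-107"}  # hand-picked early customers

def sees_new_capability(customer_id: str) -> bool:
    if customer_id in EARLY_ADOPTERS:
        return True
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT

print(sees_new_capability("cust-042"))  # True: hand-picked
print(sees_new_capability("cust-999"))  # stable True/False per customer
```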

Be sharp and ruthless about defining the MVP.

Lean Startup defines MVP (Minimum Viable Product) as “that version of the product that allows the team to collect the maximum amount of validated learning from customers”.

(We think) we know the problem. We don’t know the solution for certain; we have only a vision and a point of view on what it could be. We will only know for certain that we have a viable solution when customers tell us so by using it. So identify the top customer problems we’re trying to solve, the underlying assumptions in our proposed solution, and what we really need to learn from our customers. Then formulate testable hypotheses and use them to define the MVP.

Make validated learning the measure

In the war of SDLCs, I’m neither a blanket waterfall basher nor a true believer in agile. But having done a number of waterfall projects, I’ve observed that waterfall is typically managed by what I call “management by date”, or, more often than not, by make-believe date.

As human beings, we like certainty, and a date is certain. Setting a date feels measurable: a date feels real, it gives us a target, and over decades we’ve become accustomed to using date-driven project management to drive our product development efforts. The problem is that this gets us into the classic scope-time-budget headache, which means we’re now using those elements as the measure of our progress.

The thing is, scope, time and budget mean absolutely nothing to the customer. What really matters is whether customers find value in the solution we are trying to provide them. Traditional product development and project management practices don’t allow us to measure that until product launch, by which time it may be too late.

So we need to make learning the primary goal, not simply hitting a release date, which is really a check-the-box exercise and means nothing. Nothing beats direct customer feedback. We don’t know what the solution is until customers can get their hands on it. So instead of working like crazy to hit a release date, work like crazy to get customer validation. That allows us to validate our solution (MVP) and pivot as necessary.

Focus always, always on delivering a great user experience

Better to have less functionality that delivers a resonating experience than more functionality that compromises usability. A poor UX directly impacts the value proposition of our solution. We need look no further than Apple’s stumble with the Maps app launched alongside the iPhone 5. (Ironic.)

Continuous deployment applies not just to agile delivery, but also the roadmap

Over four years ago, Saeed Khan posted a nice piece on roadmaps where he said:

A roadmap is a planned future, laid out in broad strokes — i.e. planned or proposed product releases, listing high level functionality or release themes, laid out in rough timeframes — usually the target calendar or fiscal quarter — for a period usually extending for 2 or 3 significant feature releases into the future.

The roadmap is just that: a high-level map to achieve a vision. Not a calendar of arbitrary dates to hit. Too many roadmaps seem to suffer from the same date-driven project management approach.

For most established software products, I typically advocate having at least a 12-month roadmap that communicates the direction to be taken to achieve the vision and big business goals, and identifies the targeted epics for getting there. That 12-month view is then boiled down to a more tangible 3-month roadmap: the stuff we want to get done in the next 3 months, and what the agile teams need to work on.

Create an accountable person or body that actively looks at the roadmap on a monthly and quarterly basis. On a monthly basis, this body helps the agile Product Owner(s) prioritize the backlog against the 3-month roadmap. On a quarterly basis, this body evaluates overall progress against the 12-month roadmap. As such, 12 months is a rolling period, not an annual calendar of unsubstantiated promises of delivery.
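
Here is a minimal sketch of that cadence. The epic names and month buckets are hypothetical; the point is only the two views, a rolling 12-month roadmap and the 3-month slice the Product Owner prioritizes against.

```python
# Hypothetical rolling roadmap: epics tagged by months-out from "now",
# not fixed calendar dates, so the 12-month window rolls forward.
roadmap = [
    ("Self-serve onboarding", 2),
    ("Payment form templates", 3),
    ("Reporting dashboard", 7),
    ("Partner API", 11),
]

def monthly_view(epics):
    """Monthly: the 3-month slice the Product Owner prioritizes the backlog against."""
    return [name for name, months_out in epics if months_out <= 3]

def quarterly_view(epics):
    """Quarterly: progress check against the rolling 12-month roadmap."""
    return [name for name, months_out in epics if months_out <= 12]

print("Next 3 months:", monthly_view(roadmap))
print("Rolling 12 months:", quarterly_view(roadmap))
```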

What has your experience been implementing agile in your organization? What principles does your organization follow in executing an agile process?

The need for speed

In this post, I’d like to talk about using a “co-location” approach for defining and developing a software product, and I’d love to get your thoughts. More about what I mean by co-located product development in a moment. First, some broader context.

As human beings, we want things to be faster, better, cheaper. And that’s certainly true of software development. Many methodologies have been invented to get software to market quicker: waterfall, CMM, UML, Stage-Gate, rapid prototyping, extreme programming, user-centered design, and, of course, agile. Recently I’ve started exploring the Lean Startup methodology pioneered by Eric Ries, a sort of lean software development process geared for startups that uses the principles of reducing waste, iterative development cycles with constant customer feedback loops, and metrics-measured outcomes to achieve product/market fit.

One other methodology (if you can call it that) I’ve experienced in the past is a “co-location” approach. I basically describe this as “identify a business opportunity to solve for; then take a bunch of people across the company to represent the business, implementation, operations and IT, throw them into a room, and don’t let them out till they’ve designed the product / solved the problem.” The thinking here is that if we take our best and brightest, and focus them in singular fashion on solving a particular problem under a tight deadline, they’re bound to come up with magic, right? Isn’t that what saved the day in Apollo 13?

I’ll be honest: whenever I hear co-location, I have a major ugh moment. Here’s why. I’ve experienced co-location twice. The first was a situation where the business problem was outlined, the group ideated for a considerable amount of time, and then quickly began defining the product. Because IT (or IS, or Software Engineering, or whatever your organization calls it) was also involved, they could provide important system knowledge to ground the conversation technically, and at the end of the multi-week process provide an estimate of the level of effort (LOE) involved in developing the solution. The LOE would be a pointer to development cost and time-to-market, a key consideration used by senior management in prioritizing projects. The second was a war-room situation: a large number of quality issues put the product launch in critical jeopardy, so an emergency “all hands on deck” was called, and the entire dev staff, plus QA and product management, were put into a room together to get the software out on time.

Both situations involved a fairly large group of smart individuals working collaboratively every day over a multi-week period toward a desired outcome. Both resulted in a shoddy and costly product that received at best a lukewarm response from end-customers.

Here’s why they failed:

  • Focus on the wrong outcome. In my first scenario, the situation was set up with the right intentions, but because getting an LOE was the identified output from the exercise, the focus inevitably and quickly became defining a product with a low LOE, with low LOE being assumed as a proxy for faster time to market. In my second scenario, the focus became a race to production, leading to an inevitable sacrifice in – yep – quality.
  • Lack of product definition. This applies more to my first scenario. The desired product was described in broad strokes (like describing a flashlight as “having the ability to light the room when turned on”), but these broad strokes meant IT was dangerously filling in the gaps, leading to an overly inflated and ultimately discardable LOE.
  • Product definition by committee. Everyone has an opinion. So in a co-located situation, everyone’s opinion is heard, none are discarded, and in the end you have a diluted product that may satisfy everyone’s egos, but doesn’t truly solve the customer’s problem. Pragmatic Marketing Rules #4 and #6 are violated here:
    • #4: “The building is full of product experts. Your company needs market experts.”
    • #6: “Your opinion, while interesting, is irrelevant.”
  • Product definition by the business person. This is the opposite of the above. If one or more business people are in the room (this could be the marketing director, business development or sales guy, or a GM), there is a tendency for everyone else to defer to them. The non-business people will rightfully look at them asking “So what do you want?” The business folks have an understandably deeply vested interest in the success of the product (because after all, they’re paying for it!), and so will want to control how the product is defined. The problems are: (1) business folks are typically terrible at product definition, and (2) none of these folks are market experts, which brings us to the next problem.
  • Lack of market, especially customer, input. This is perhaps the most cardinal sin of all, unforgivable violations of Pragmatic Marketing Rules #2 and #8:
    • #2: “An outside-in approach increases the likelihood of product success.” Co-location strikes me as definitely a tune-out approach.
    • #8: “The answer to most of your questions is not in the building.” Customer research may have been done upfront in defining the business opportunity, but is often neglected in validating potential solutions during the co-location effort.

Another issue I have with co-location is the resource commitment. Co-location requires a bunch of people across the enterprise to be committed to the process for a fixed length of time. That means anything else those individuals may have been working on must be dropped or put on hold. The larger the organization, the larger the ripple effect, as each of these individuals is likely working on other initiatives with other folks across the enterprise who are not part of the co-location effort. Business comes to a standstill for those efforts. This seems inefficient and wasteful.

The Apollo 13 approach may be great in emergency situations, when a creative solution is needed for an extraordinary situation. Co-location may make sense when there is a need to solve a problem that is immediate and urgent. But it does not strike me as a sustainable way to define and develop software products.

So why is co-location used? I can identify two reasons. First, back to my opening remarks. There is a need for speed. We have a business problem, and if we get our best folks to focus on it, we should be able to get to a solution quickly. Voila! Problem solved in a few short weeks!

Second, prioritization. No business can pursue all opportunities at once. Senior management must constantly make difficult trade-off decisions on which problems to solve, products to develop, and projects to fund. An LOE is a critical piece of input into prioritization decisioning. Co-location’s appeal is that an LOE can be produced in a short timeframe to help with that decisioning, and every key stakeholder has been a part of that process.

The problem is this is all an illusion. Co-location gives the perception of speed. The LOE is typically either inflated, because of the lack of proper product definition, or is gratifyingly low only because the product has been so watered down as to render it ineffective in truly solving the customer’s problem. Either way, quality suffers.

Now, I admit this has been my personal experience, and I certainly don’t profess my experience to be a proxy for wider market truth. (My opinion, while interesting, is irrelevant.) So I shall now take an outside-in approach and tune in to hear from you. Have you ever experienced a successful co-located product development effort? Does co-location ever make sense for software product definition and development? If so, when? And here’s a really fun question: if your management believed in co-location and you did not, how would you convince them otherwise? Are there outcome-defined ways, quantitative or otherwise, to prove your case?
