
Applying Lean Techniques in a Big Company: The “Hothouse”

Large companies are always trying to find ways to move at “startup speed” or “digital speed”. The challenge is often quite simply people: there are just too many of them to keep aligned in order to move swiftly. For any initiative, there are usually multiple stakeholders and affected parties. That means multiple opinions and feedback cycles.

It’s easy to say decision making should be centralized, but it’s much harder to execute in practice. Even a 1,000-person company has multiple departments, and bigger companies often have sub-departments. If I’m driving a new initiative that will materially impact the customer and/or the business, the fact of the matter is I need to actively coordinate with many of these groups throughout the process: marketing, operations, call centers, brand, legal, IT/engineering, design, etc. That means coordinating not only with those department heads (usually VPs), but also with their lieutenants (usually Directors), who are responsible for execution within their domains, control key resources, and thus have a material influence over the decisions of the department heads.

In addition, large companies are often crippled by their own processes. Stage-Gate type implementations can be particularly notorious for slowing things down with the plethora of paperwork and approvals they tend to involve.

All of this means tons of emails, socialization meetings, documentation, and needless deck work, all of which is waste because it prevents true forward progress in driving decision making, developing solutions, and getting them to market.

Initiatives involving UI are particularly susceptible to this sort of corporate bureaucracy for the simple reason that UI is visual and therefore easy to react to, and everyone feels entitled to opine on the user experience. Once, one of my product managers spent a month trying to get feedback and approvals from a handful of senior stakeholders on the UX direction for his initiative. A month! Why did it take so long? For the simple and very real reason that it was difficult to get all these senior leaders in the same room at the same time. What a waste of time!

So how to solve for this? Several years ago, my colleagues and I faced this exact challenge. While an overhaul of the entire SDLC was needed, that takes time in a large organization. What we needed was something that could accelerate decision making, involve both senior stakeholders and the project team, yet be implemented quickly. That’s when we hit upon our true insight: What we needed was to apply lean thinking to the process of gaining consensus on key business decisions.

And that’s how we adopted an agile innovation process that we called a “Hothouse.” A Hothouse combines the Build-Measure-Learn loop from Lean Startup with design thinking principles and agile development into a 2- to 3-day workshop in which senior business leaders work with product development teams through a series of iterative sprints to solve key business problems.

That’s a mouthful. Let’s break it down.

The Hothouse typically takes place over 2 or 3 days. One to three small Sprint teams are assembled to work on specific business problems throughout those 2-3 days in a series of Creative Sprints, each typically about 3 hours long (4 hours max, 2.5 hours minimum). Between each Creative Sprint is a Design Review in which the teams present the deliverables from their last sprint to senior leaders, who provide constructive, actionable feedback. The teams take this feedback into the next Creative Sprint.
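To make the cadence concrete, here is a minimal sketch in Python of how a single Hothouse day might be laid out as alternating Creative Sprints and Design Reviews. It’s purely illustrative: the number of sprints per day and the 45-minute review length are assumptions, not prescriptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Session:
    name: str
    start: datetime
    duration: timedelta

def hothouse_day(day_start: datetime,
                 sprints: int = 2,
                 sprint_hours: float = 3.0,   # typical length; 2.5 min, 4 max
                 review_minutes: int = 45     # assumed review length
                 ) -> list:
    """Lay out one Hothouse day as alternating Creative Sprints and Design Reviews."""
    assert 2.5 <= sprint_hours <= 4.0, "Creative Sprints run 2.5 to 4 hours"
    agenda, clock = [], day_start
    for i in range(1, sprints + 1):
        agenda.append(Session(f"Creative Sprint {i}", clock, timedelta(hours=sprint_hours)))
        clock += timedelta(hours=sprint_hours)
        agenda.append(Session(f"Design Review {i} (senior leaders give actionable feedback)",
                              clock, timedelta(minutes=review_minutes)))
        clock += timedelta(minutes=review_minutes)
    return agenda

for s in hothouse_day(datetime(2024, 5, 1, 9, 0)):
    print(f"{s.start:%H:%M}  {s.name}")
```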

This iterative workflow of Creative Sprints and Design Reviews follows the Build-Measure-Learn meta pattern from Lean Startup:

[Figure: Hothouse Build-Measure-Learn loop]

During the Creative Sprints, teams pursue the work using design thinking and agile principles: reframing the business challenge, collaborative working between business and developers, ideation, user-centric thinking, using synthesis for problem solving, rapid prototyping, iterative and continuous delivery, face-to-face conversation as the primary means of communication, and team retrospectives at the start of each sprint.

The Hothouse is used to accelerate solution development for a small handful of key business problems. So job #1 is to determine the specific business problems you want to solve in the Hothouse. The fewer, the better, as a narrow scope allows for more focused, efficient and faster solution development. For each business problem, teams bring in supporting material, such as existing customer research, current state user experience, business requirements, prototypes, architectural maps, etc. as inputs into the Hothouse. The expected outputs from the Hothouse depend on the business problems being addressed and the specific goals of the Hothouse, but can take the form of a refined and approved prototype, prioritized business requirements or stories, system impacts assessment, high-level delivery estimates, and even a marketing communication plan.

[Figure: Hothouse process]

At the end of the Hothouse, the accepted outputs become the foundation for further development post-Hothouse.

I’ve been part of numerous Hothouses, both as a participant and facilitator, and I’ve seen Hothouses applied to solve business challenges of varying scope and scale. For example:

  • Re-design of a web page or landing page.
  • Design of a user flow through an application.
  • Development of specific online capabilities, such as online registration and customer onboarding.
  • A complex re-platforming project involving migration from an old system to a new one with considerations for customer and business impacts.
  • An acquisition between F1000 companies.

The benefits to a large organization are manifold:

  • Accelerates decision making. What typically takes weeks or months is completed in days.
  • Senior leadership involvement means immediate feedback and direction for the project team.
  • Ensures alignment across all stakeholders and teams. The folks most directly impacted — senior leadership and the project delivery team — are fully represented. By the end of the Hothouse, everyone is on the same page on everything: the business problems and goals, proposed solutions, high-level system impacts, potential delivery trade-offs, priorities, and next steps.
  • This alignment serves as a much-needed baseline for the project post-Hothouse.
  • The bottom line is faster product definition and solution development, which speeds time to market.

A Hothouse can help you generate innovative solutions to your organization’s current problems, faster and cheaper. More details here. If you’re interested in learning more about agile innovation processes like the Hothouse, or how to implement one at your organization, reach out to me via Twitter or LinkedIn.

Disclaimer: I didn’t come up with the term Hothouse. I don’t know who did. But it’s a name we used internally, and it stuck. I think the original methodology comes from the UK, but I’m not sure. If you happen to know if the name is trademarked, please let me know and I’ll be happy to add the credit to this post.

Why It’s Better To Be Smaller When Implementing Agile In A Large Company

Having done many waterfall projects, I was recently part of an effort to move a large organization to an agile software delivery process after years of following waterfall. I’ll be blunt: it was downright painful. That said, I’d still pick agile over the multi-staged, paper-intensive, meeting-heavy, PMO-driven waterfall process I encountered when I joined the organization.

Although the shift was painful, it was a terrific educational experience. Based on lessons learned, we adopted certain principles to guide our approach to implementing agile in the organization.

Dream big. Think smaller.

This means having a vision for what the solution will look like and the benefits it will provide customers, but then boiling it down to specifics to be able to execute. For example, at one of my former gigs, we had identified the need to make improvements to our online payments process, and captured over 20 different enhancements on a single slide under the title of “Payment Enhancements”. (Yes, in very tiny font, like 8-point.) Those enhancements went beyond simple things like improving copy or the layout of elements. Each enhancement would have involved material impacts to back-end processes. As such, “Payment Enhancements” is not an epic, or at least, it’s a super big and super nebulous one that cannot be measured. Rather, I argued that each bullet on that 1-pager could be considered an epic in and of itself, one that could be placed on the roadmap and would need to be further broken down into stories for execution.

Thinking smaller also means considering launching the capability to a smaller subset of customers. Even when pursuing an enhancement to an existing product, it’s important to ask whether the enhancement will truly benefit all customers using your product, or whether it even needs to be made available to all customers on day 1. The benefits of identifying an early adopter segment: (1) get code out faster, (2) lower customer impact, (3) get actionable customer feedback sooner.
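One lightweight way to operationalize an early adopter launch is a simple gate in front of the new capability. Here’s a minimal sketch in Python; it’s illustrative only, and the segment IDs, rollout percentage, and function names are assumptions rather than a reference to any specific feature-flag product.

```python
import hashlib

# Hand-picked pilot customers and rollout size (assumed values for illustration)
EARLY_ADOPTER_SEGMENT = {"cust-001", "cust-042"}
ROLLOUT_PERCENT = 5

def in_rollout(customer_id: str) -> bool:
    """Deterministically bucket a customer (0-99) into the gradual rollout."""
    bucket = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT

def show_new_payments_flow(customer_id: str) -> bool:
    """Early adopters and a small, stable slice see the enhancement; everyone else keeps the current flow."""
    return customer_id in EARLY_ADOPTER_SEGMENT or in_rollout(customer_id)

print(show_new_payments_flow("cust-001"))   # True: hand-picked early adopter
print(show_new_payments_flow("cust-999"))   # True or False, but stable for this customer
```

Because the bucketing is deterministic, a given customer always sees the same experience, and ramping up is just a matter of raising the rollout percentage as validated feedback comes in.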

Be sharp and ruthless about defining the MVP.

Lean Startup defines MVP (Minimum Viable Product) as “that version of the product that allows the team to collect the maximum amount of validated learning from customers”.

(We think) we know the problem. We don’t know the solution for certain; we have only a vision and a point of view on what it could be. We will only know for certain that we have a viable solution when customers tell us so by using it. So identify the top customer problems we’re trying to solve, the underlying assumptions in our proposed solution, and what we really need to learn from our customers. Then formulate testable hypotheses and use those to define our MVP.
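One concrete way to keep those hypotheses honest is to write each one down as a falsifiable statement with a metric and a success threshold before MVP work starts. The sketch below is purely illustrative; the example hypothesis, metric, and numbers are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A testable assumption behind the MVP: what we believe, how we measure it, what counts as validated."""
    belief: str
    metric: str
    success_threshold: float

    def validated(self, observed: float) -> bool:
        return observed >= self.success_threshold

# Example with assumed numbers: will customers actually use a one-click payment option?
h = Hypothesis(
    belief="Existing customers will adopt one-click payments",
    metric="share of eligible payments made via one-click in the first 30 days",
    success_threshold=0.20,
)
print(h.validated(observed=0.27))  # True -> keep building; False -> pivot or rethink the solution
```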

Make validated learning the measure

In the war of SDLCs, I’m no blanket waterfall basher, nor a true believer in agile. But having done a number of waterfall projects, I’ve observed that they’re typically run by what I call “management by date”, or, more often than not, management by make-believe date.

As human beings, we like certainty. A date is certain. Setting a date gives us something that feels measurable, partly because a date feels real and gives us a target, and partly because over decades we’ve become so accustomed to using date-driven project management to drive our product development efforts. The problem is that this gets us into the classic scope-time-budget headache, which means we’re now using those elements as the measure of our progress.

The thing is, scope, time and budget mean absolutely nothing to the customer. What really matters is whether customers find value in the solution we are trying to provide them. Traditional product development and project management practices don’t allow us to measure that until product launch, by which time it may be too late.

So we need to make learning the primary goal, not simply hitting a release date, which is really a check-the-box exercise and means nothing. Nothing beats direct customer feedback. We don’t know what the solution is until customers can get their hands on it. So instead of working like crazy to hit a release date, work like crazy to get customer validation. That allows us to validate our solution (MVP) and pivot as necessary.

Focus always, always on delivering a great user experience

Better to have less functionality that delivers a resonating experience than more functionality that compromises usability. A poor UX directly impacts the value proposition of our solution. We need look no further than Apple’s Maps stumble when iOS 6 launched with the iPhone 5. (Ironic.)

Continuous deployment applies not just to agile delivery, but also the roadmap

Over four years ago, Saeed Khan posted a nice piece on roadmaps where he said:

A roadmap is a planned future, laid out in broad strokes — i.e. planned or proposed product releases, listing high level functionality or release themes, laid out in rough timeframes — usually the target calendar or fiscal quarter — for a period usually extending for 2 or 3 significant feature releases into the future.

The roadmap is just that: a high-level map to achieve a vision. Not a calendar of arbitrary dates to hit. Too many roadmaps seem to suffer from the same date-driven project management approach.

For most established software products, I typically advocate having at least a 12-month roadmap that communicates the direction to be taken to achieve the vision and big business goals. It identifies targeted epics to achieve that vision. The vision is boiled down to a more tangible 3-month roadmap. That’s the stuff we want to get done in the next 3 months and what the agile teams need to work on.

Create an accountable person or body that actively looks at the roadmap on a monthly and quarterly basis. On a monthly basis, this body helps the agile Product Owner(s) prioritize the backlog against the 3-month roadmap. On a quarterly basis, this body evaluates overall progress against the 12-month roadmap. As such, 12 months is a rolling period, not an annual calendar of unsubstantiated promises of delivery.
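To illustrate what “rolling” means in practice, here’s a minimal sketch in Python. The epic names and dates are made up; the point is that the 12-month window always starts from today, and the first three months are the near-term slice the agile teams prioritize their backlogs against.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Epic:
    name: str
    target_month: date  # rough timeframe, not a delivery promise

def rolling_windows(roadmap, today):
    """Split the roadmap into the tangible 3-month slice and the rest of the rolling 12 months."""
    near_cutoff = today + timedelta(days=90)
    far_cutoff = today + timedelta(days=365)
    near = [e for e in roadmap if today <= e.target_month < near_cutoff]
    far = [e for e in roadmap if near_cutoff <= e.target_month < far_cutoff]
    return near, far

# Assumed example epics and dates
roadmap = [
    Epic("Online registration revamp", date(2024, 7, 1)),
    Epic("Payment enhancements: saved cards", date(2024, 9, 1)),
    Epic("Customer onboarding flow", date(2025, 2, 1)),
]
near, far = rolling_windows(roadmap, today=date(2024, 6, 1))
print("Next 3 months:", [e.name for e in near])
print("Rest of the rolling 12 months:", [e.name for e in far])
```

Re-running the same split next month naturally rolls both windows forward, which is the point: the roadmap is re-evaluated monthly against the 3-month slice and quarterly against the full 12 months.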

What has your experience been implementing agile in your organization? What principles does your organization follow in executing an agile process?

The need for speed

In this post, I’d like to talk about using a “co-location” approach for defining and developing a software product, and I’d love to get your thoughts. More about what I mean by co-located product development in a moment. First, some broader context.

As human beings, we want things faster, better, cheaper. And that’s certainly true of software development. Many methodologies have been invented to get software out to the marketplace quicker: waterfall, CMM, UML, Stage-Gate, rapid prototyping, extreme programming, user-centered design, and, of course, agile. Recently I’ve started exploring the Lean Startup methodology pioneered by Eric Ries, a sort of lean software development process geared for startups that uses the principles of reducing waste, iterative development cycles with constant customer feedback loops, and outcomes measured by metrics to achieve product/market fit.

One other methodology (if you can call it that) I’ve experienced in the past is a “co-location” approach. I basically describe this as “identify a business opportunity to solve for; then take a bunch of people across the company to represent the business, implementation, operations and IT, throw them into a room, and don’t let them out till they’ve designed the product / solved the problem.” The thinking here is that if we take our best and brightest, and focus them in singular fashion on solving a particular problem under a tight deadline, they’re bound to come up with magic, right? Isn’t that what saved the day in Apollo 13?

I’ll be honest. Whenever I hear co-location, I have a major ugh moment. Here’s why. I’ve experienced co-location twice. The first was a situation where the business problem was outlined, the group ideated for a considerable amount of time, and then quickly began defining the product. Because IT (or IS or Software Engineering or whatever your organization calls it) was also involved, they could provide important system knowledge to ground the conversation technically, and at the end of the multi-week process provide an estimate of the level of effort (LOE) involved in developing the solution. The LOE would be a pointer to development cost and time to market, a key consideration used by senior management in prioritizing projects. The second was a war-room situation: a large number of quality issues with the software put the product launch in critical jeopardy, so an emergency “all hands on deck” was called and the entire dev staff, QA, and product management were put into a room together to get the software out on time.

Both situations involved a fairly large group of smart individuals working collaboratively every day over a multi-week period toward a desired outcome. Both resulted in a shoddy and costly product that received, at best, a lukewarm response from end-customers.

Here’s why they failed:

  • Focus on the wrong outcome. In my first scenario, the situation was set up with the right intentions, but because getting an LOE was the identified output from the exercise, the focus inevitably and quickly became defining a product with a low LOE, with low LOE being assumed as a proxy for faster time to market. In my second scenario, the focus became a race to production, leading to an inevitable sacrifice in – yep – quality.
  • Lack of product definition. This applies more to my first scenario. The desired product was described in broad strokes (like describing a flashlight as “having the ability to light the room when turned on”), but these broad strokes meant IT was dangerously filling in the gaps, leading to an overly inflated and ultimately discardable LOE.
  • Product definition by committee. Everyone has an opinion. So in a co-located situation, everyone’s opinion is heard, none are discarded, and in the end you have a diluted product that may satisfy everyone’s egos but doesn’t truly solve the customer’s problem. Pragmatic Marketing Rules #4 and #6 are violated here:
    • #4: “The building is full of product experts. Your company needs market experts.”
    • #6: “Your opinion, while interesting, is irrelevant.”
  • Product definition by the business person. This is the opposite of the above. If one or more business people are in the room (this could be the marketing director, business development or sales guy, or a GM), there is a tendency for everyone else to defer to them. The non-business people will rightfully look at them asking “So what do you want?” The business folks have an understandably deeply vested interest in the success of the product (because after all, they’re paying for it!), and so will want to control how the product is defined. The problems are: (1) business folks are typically terrible at product definition, and (2) none of these folks are market experts, which brings us to the next problem.
  • Lack of market, especially customer, input. This is perhaps the most cardinal sin of all, an unforgivable violation of Pragmatic Marketing Rules #2 and #8:
    • #2: “An outside-in approach increases the likelihood of product success.” Co-location strikes me as definitely a tune-out approach.
    • #8: “The answer to most of your questions is not in the building.” Customer research may have been done upfront in defining the business opportunity, but is often neglected in validating potential solutions during the co-location effort.

Another issue I have with co-location is the resource commitment. Co-location requires a bunch of people across the enterprise to be committed to the process for a fixed length of time. That means anything else those individuals may have been working on at the time must be dropped or put on hold. The larger the organization, the larger the ripple effect, as each of these individuals is likely working on other initiatives with other folks across the enterprise who are not part of the co-location effort. This means business comes to a standstill for those efforts. It seems inefficient and wasteful.

The Apollo 13 approach may be great in emergency situations, when a creative solution is needed for an extraordinary situation. Co-location may make sense when there is a need to solve a problem that is immediate and urgent. But it does not strike me as a sustainable way to define and develop software products.

So why is co-location used? I can identify two reasons. First, back to my opening remarks. There is a need for speed. We have a business problem, and if we get our best folks to focus on it, we should be able to get to a solution quickly. Voila! Problem solved in a few short weeks!

Second, prioritization. No business can pursue all opportunities at once. Senior management must constantly make difficult trade-off decisions on which problems to solve, products to develop, and projects to fund. An LOE is a critical piece of input into prioritization decisioning. Co-location’s appeal is that an LOE can be produced in a short timeframe to help with that decisioning, and every key stakeholder has been a part of that process.

The problem is this is all an illusion. Co-location gives the perception of speed. The LOE is typically either inflated, because of the lack of proper product definition, or is gratifyingly low only because the product has been so watered down as to render it ineffective in truly solving the customer’s problem. Either way, quality suffers.

Now, I admit this has been my personal experience, and I certainly don’t profess my experience to be a proxy for wider market truth. (My opinion, while interesting, is irrelevant.) So I shall now take an outside-in approach and tune in to hear from you. Have you ever experienced a successful co-located product development effort? Does co-location ever make sense for software product definition and development? If so, when? And here’s a really fun question: if your management believed in co-location and you did not, how would you convince them otherwise? Are there outcome-based ways, quantitative or otherwise measurable, to prove your case?
