Critique of a lean test and ideas on when to kill it and move on

The following is my critique of a proposed lean test, as well as some ideas on when to kill off a lean test that isn't working.

The proposed lean test

The entrepreneur wants to convert their company's home page banner into a newsletter capture opportunity instead of a 'Get a Free Session' opportunity. This will allow them to capture emails at a faster clip.

Hypothesis:

Parents need to sign up for the newsletter and receive valuable content before signing their teen up for a free session with one of our agents. We believe that after collecting 100 parent emails from our homepage banner we can get 10 parents/teens to sign up for a free session.

Experiment:

Gather 100 parent emails through various channels and provide an email with value (our free best-selling book, webinar opportunities, our favorite blog posts). Once a parent receives the email, we will follow up and ask if he/she would like a free session.

Metrics for pass/fail:

Pass = 10 or more parents sign up for a free session
Fail = Less than 10 sign up for a free session

My analysis

First, the test is laid out perfectly. There is the hypothesis, or what the author supposes to be true. There is the experiment, or how to test the validity of the hypothesis. And finally there are the metrics, which determine whether the test passes or fails.

Second, my response.

This test is pretty close to being good. I think you are trying to test whether changing the home page banner from a quick kill (get a free session) to an info ask (email capture) will improve bottom-line conversions (parents signing up for a free session). The test is essentially whether your clients need to be nurtured/given information before being closed. That's a good thing to find out and has direct implications for your sales channel. (Next time, can you include screenshots too please?)

Two things:
1) For the test to be apples-to-apples, you can’t collect parent emails through “various channels”. You need to run this test with emails collected only from your home page banner.
2) You need a metric with more meaning than 10%, which seems arbitrary to me. If only 9 parents signed up for a free session, would you really convert your banner back? A more useful metric is to beat your current home page banner. So if your current banner converts at 9%, you would stick with the newsletter signup if it hits 10%. Admittedly, you'd need far more than 100 users to make that call (see the rough sample-size sketch below), but it's a start.
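
To put point #2 in numbers, here's a back-of-the-envelope sketch in plain Python (standard library only) using the usual sample-size formula for a two-proportion z-test. The 9% and 10% conversion rates are just the illustrative figures from above, and the 5% significance level and 80% power are conventional defaults, not anything specific to this company.

    from math import sqrt, ceil
    from statistics import NormalDist

    def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
        """Approximate visitors needed per variant to tell two conversion
        rates apart with a two-sided two-proportion z-test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
        z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
        p_bar = (p_control + p_variant) / 2
        numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                     + z_beta * sqrt(p_control * (1 - p_control)
                                     + p_variant * (1 - p_variant))) ** 2
        return ceil(numerator / (p_control - p_variant) ** 2)

    # Illustrative numbers only: current banner at 9%, newsletter flow at 10%
    print(sample_size_per_variant(0.09, 0.10))  # roughly 13,500 visitors per variant

In other words, telling a 9% banner apart from a 10% one with any statistical confidence takes thousands of visitors, not 100. That's why I'd treat the 100-email test as a directional signal rather than a verdict.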

Response from the entrepreneur – aka “When do we move on from the idea”

The #2 point is what trips me up. It is quite arbitrary, but honestly anything is better than our current home page banner. This is a test of a business model as well, and there is no guarantee that the market actually needs it (although we believe so). My question through all of this is 'when do we know this idea needs to be crumpled up so we can move on to the next one?'

Moving on – consider the opportunity cost

My advice is to approach this from an opportunity cost perspective. I don't think an idea can ever really be "killed", but it can be lowered down the backlog. What that means is that other ideas have a better chance of success, so we downgrade the current idea until something changes: another chance to try it, more resources, or a shift in the marketplace.

I suggest using a pre-commitment mechanism to decide when to move on. That means drawing an emotional line in the sand before the test: whatever "feels" right before the test is what you should commit to. I think the challenge you have right now is that you don't have enough data, so you don't know what a pass or a fail really looks like. In that case, you need to start running more experiments! After you have a few months under your belt, and much more exposure to the marketplace, you'll have more data and be in a stronger position.

Here's another way to think about it: most entrepreneurs look for any glimmer of hope they can find. Don't look for glimmers of hope. Look for a smack in the face! Don't agonize over whether a 3% or 5% click-through is sufficient. Keep going until you get a 25-50% click-through! If people really want something, that'll be the sign you're looking for.

Posted in Lean Startup
2 comments on "Critique of a lean test and ideas on when to kill it and move on"
  1. Chantelle says:

    Do you have any advice on when testing is even needed in the first place? And what I really mean is, when the time put into testing is not worth the potential impact of the decision.

    I’m asking this because I’ve experienced a lot of situations where every decision becomes a test. Many times it makes sense. But then there are the times where I question whether we can just make a decision and see how it goes, vs. having to spend time creating a sub-experiment first.

    There are of course many variables that go into my reasoning in the example I'm thinking of, but to that end I'm just wondering if there is any information out there on "testing things to death": when a test is actually needed, and how you can justify pushing back on testing if you think it may not be worth the extra time.

    • Eric says:

      Hi Chantelle, thanks for checking out my blog and for your question. You are correct that there is always a tradeoff between the cost (in time/resources) of testing and the potential impact. One easy place to start is asking whether the results of the test would make us change our mind about what we're going to do next. If not, no need to run the test at all!

      A more complete answer to your question is to use an agile methodology to prioritize all the tests/work you want to do during a given time period (normally 2-4 weeks). Doing tests of relatively minor ideas is not a good use of resources compared to doing tests on critical ideas. Prioritizing the work in your backlog will help you decide if devoting resources to a non-critical issue makes sense.

      On pushing back, I’d suggest asking for the reasoning behind a test. I advocate that every test needs a hypothesis, in order to avoid throwing stuff at the wall to see what sticks (very inefficient). I don’t suggest testing just for testing’s sake, as you mention. Instead, there should be a strong hypothesis that doing something differently will lead to a dramatically different result. That is a test well worth running! Hope that helps and please feel free to comment again with more specific details if you’d like.
