
My 7 Rules for A/B Testing That Triple Conversion Rates

September 11, 2015 By Lars Lofgren 16 Comments

I really don’t care how any given A/B test turns out.

That’s right. Not one bit.

But wait, how do I double or triple conversion rates without caring how a test performs?

I actually care about the whole SYSTEM of testing. All the pieces need to fit together just right. If not, you’ll waste a ton of time A/B testing without getting anywhere. This is what happens to most teams.

But if you do it right. If you play by the right rules. And you get all the pieces to fit just right, it’s simply a matter of time before you triple conversions at any step of your funnel.

I set up my system so that the more I play, the more I win. I stack enough wins on top of each other that conversion rates triple. And any given test can fail along the way. I don’t care.

What does my A/B testing strategy look like? It’s pretty simple.

  • Cycle through as many tests as possible to find a couple of 10-40% wins.
  • Stack those wins on top of each other in order to double and triple conversion rates.
  • Avoid launching any false winners that drag conversions back down.

For all this to work, you’ll need to follow 7 very specific rules. Each of them is critical. Skip one and the whole system breaks down. Follow them and you’ll drive your funnel relentlessly up and to the right.

Rule 1: Above all else, the control stands

I look at A/B tests very differently from most people.

Usually, when someone runs a test, they’ll consider each of their variants as equals. The control and the variant are both viable and their goal is to see which one is better.

I can’t stand that approach.

We’re not here for a definitive answer. We’re here to cycle through tests to find a couple of big winners that we can stack on top of each other.

If there’s a 2% difference between the variant and the control, I really don’t care which one is the TRUE winner. Yes, yes, yes, I’d care about a 2% win if I had enough data to hit statistical significance on those tests (more on this in a minute). But unless you’re Facebook or Amazon, you probably don’t have that kind of volume. I’ve worked on multiple sites with more than 1 million visitors/month and it’s exceedingly rare to have enough data hitting a single asset in order to detect those kinds of changes.

In order for this system to work, you have to approach the variant and control differently. Unless a variant PROVES itself as a clear winner, the control stands. In other words, the control is ALWAYS assumed to be the winner. The burden of proof is on the variant. No changes unless the variant wins.

This ensures that we’re only making positive changes to assets going forward.

Rule 2: Get 2000+ people through the test within 30 days

So you don’t have any traffic? Then don’t A/B test. It’s that simple. Do complete revamps on your assets and then eyeball it.

Remember, we need the A/B testing SYSTEM working together. And we’re playing the long game. Which means we need a decent volume of data so we can cycle through a bunch of different test ideas. If it takes you 6 months to run a single test, you’ll never be able to run enough tests to find the few winners.

In general, I look for 2000 or more people hitting the asset that I’m testing within 30 days. So if you want to A/B test your homepage, it better get 2000 unique visitors every month. I even prefer 10K-20K people but I’ll get started with as little as 2000/month. Anything less than that and it’s just not worth it.

Rule 3: Always wait at least a week

Inside of a week, data is just too volatile. I’ve had tests with 240% improvements at 99% certainty within 24 hours of launching the test. This is NOT a winner. It always comes crashing down. Best-case scenario, it’s really just a 30-40% win. Worst case, it flip-flops and is actually a 20% decline.

It also lets you get a full weekly cycle worth of data. Visitors don’t always behave the same on weekends as they do during the week. So a solid week’s worth of data gives you a much more consistent sample set.

Here’s an interesting result from one of my tests. Right out of the gate, it looked like I had a 10% lift. After a week of running the test, it did a COMPLETE flip-flop on me and became a 10% loser (at 99% certainty too):

Flip Flop A/B Test

One of my sneaking suspicions is that most of the 250% lift case studies floating around the interwebs are just tests that had extreme results in the first few days. If they had run a bit longer, they would have come down to a modest gain. Some of them would even flip-flop into losers. But because people declare winners too soon, they run around on Twitter declaring victory.

Rule 4: Only launch variants at 99% statistical significance

Wait, 99%? What happened to 95%?

If you’ve done an A/B test, you’ve probably run across the recommendation that you should wait until you hit 95% significance. That way, you’ll only pick a false winner 1 out of every 20 tests. And none of us want to pick losers so we typically follow this advice.

You’ve run a bunch of A/B tests. You find a bunch of wins. You’re proud of those wins. You feel a giant, happy A/B testing bubble of pride.

Well, I’m going to pop your A/B testing bubble of pride.

Your results didn’t mean anything. You picked a lot more losers than just 1 in 20. Sorry.

Let’s back up a minute. Where does the 95% statistical significance rule come from?

Dig up any academic or scientific journal that has quantitative research and you’ll find 95% statistical significance everywhere. It’s the gold standard.

When marketers started running tests, it was a smart move to use this same standard to see if our data actually told us anything. But we forgot a key piece along the way.

See, you can’t just run a measure of statistical confidence on your test after it’s already running. You need to determine your sample size first. We do this by deciding the minimum improvement that we want to detect. Something like 5% or 10%. Then we can figure out the statistical power needed and, from there, determine our sample size. Confused yet? Yeah, you kind of need to know some statistics to do this stuff. I need to look it up in a textbook each time it comes up.
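If you’re curious what that prep work looks like, here’s a rough sketch in Python using the statsmodels library. The baseline conversion rate and minimum detectable lift are made-up numbers, so plug in your own funnel data.

    # Rough sketch of the up-front sample size calculation (statsmodels).
    # The baseline and minimum lift below are assumptions, not real data.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    baseline = 0.10                  # control's current conversion rate
    min_lift = 0.10                  # smallest relative improvement worth detecting
    variant = baseline * (1 + min_lift)

    # Cohen's h effect size for comparing two proportions
    effect = proportion_effectsize(variant, baseline)

    # Visitors needed PER VARIANT at 95% significance and 80% power
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect,
        alpha=0.05,                  # the classic 95% threshold
        power=0.8,                   # odds of detecting the lift if it's real
        alternative='two-sided',
    )
    print(round(n_per_variant))      # several thousand visitors per variant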

So what happens if we skip all the fancy shmancy stats stuff and just run tests to 95% confidence without worrying about it? You come up with false positives WAY more frequently than just 1 out of 20 tests.

Here’s an example test I ran. In the first two days, we got a 58.7% increase in conversions at 97.7% confidence:

Chasing Statistical Significance with A/B Tests - 2 Day Results

That’s more than good enough for most marketers. Most people I know would have called it a winner, launched it, and moved on.

Now let’s fast-forward 1 week. That giant 58.7% win? Gone. We’re at a 17.4% lift with only 92% confidence:


Chasing Statistical Significance with A/B Tests - 1 Week Results

And the results after 4 weeks? Down to an 11.7% win at 95.7% certainty. We’ve gone from a major win to a marginal win in a couple of weeks. It might stabilize here. It might not.

Chasing Statistical Significance with A/B Tests - 4 Week Results

We have tests popping in and out of significance as they collect data. This is why determining your required sample size is so important. You want to make sure that a test doesn’t trick you early on.

But Lars! It still looks like a winner even if it’s a small winner! Shouldn’t we still launch it? There are two problems with launching early:

  1. There’s no guarantee that it would have turned out a winner in the long run. If we had kept running the test, it might have dropped even further. And every once in a while, it’ll flip-flop on you to become a loser. Then we’ve lost hard-earned wins from previous winners.
  2. We would have vastly over-inflated the expected impact on the business. A 60% win moves mountains. They crush your metrics and eat board decks for breakfast. 11% wins, on the other hand, have a much gentler impact on your growth. They give your metrics a soothing spa package and nudge them a bit in the right direction. Calling that early win at 60% gets the whole team way too excited. Those same hopes and dreams get crushed in the coming weeks when growth is far more modest. Do that too many times and people stop trusting A/B test results. They’ll also take the wrong lessons from it and start focusing on elements that don’t have a real impact on the business.

So what do we do if 95% statistical significance is unreliable?

There’s an easier way to do all this.

While I was at Kissmetrics, I worked with Will Kurt, our Growth Engineer at the time. He’s a wicked smart guy who runs his own statistics blog now.

We modeled out a bunch of A/B testing strategies over the long term. There’s a blog post that goes over all our data and I also did a webinar on it. How does a super disciplined academic research strategy compare to the fast and loose 95% online marketing strategy? What if we bump it to 99% statistical significance instead?

We discovered that you’d get very similar results over the long term if you just used a 99% statistical significance rule. It’s just as reliable as the academic research strategy without needing to do the heavy stats work for each test. And using 95% statistical significance without a required sample size isn’t as reliable as most people think it is.

The 99% rule is the cornerstone of my A/B testing strategy. I only make changes at 99% statistical significance. Any less than that and I don’t change the control. This reduces the odds of launching false winners to a more manageable level and allows us to stack wins on top of each other without accidentally negating our wins with a bad variant.
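Here’s a minimal sketch of what that decision rule can look like in code, using a standard two-proportion z-test from statsmodels. The visitor and conversion counts are hypothetical.

    # Minimal sketch of the "launch only at 99%" rule. Counts are made up.
    from statsmodels.stats.proportion import proportions_ztest

    control_conversions, control_visitors = 230, 4100
    variant_conversions, variant_visitors = 298, 4080

    stat, p_value = proportions_ztest(
        count=[variant_conversions, control_conversions],
        nobs=[variant_visitors, control_visitors],
    )

    variant_rate = variant_conversions / variant_visitors
    control_rate = control_conversions / control_visitors

    # The burden of proof is on the variant: launch only at 99% significance.
    if p_value < 0.01 and variant_rate > control_rate:
        print("Launch the variant")
    else:
        print("The control stands")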

Rule 5: If a test drops below a 10% lift, kill it.

Great, we’re now waiting for 99% certainty on all our tests.

Doesn’t that dramatically increase the time it takes to run all our tests? Indeed it does.

Which is why this is my first kill rule.

Again, we care about the whole system here. We’re cycling to find the winners. So we can’t just let a 2-5% test run for 6 months.

What would you rather have?

  • A confirmed 5% winner that took 6 months to reach
  • A 20% winner after cycling through 6-12 tests in that same 6 month period

To hell with that 5% win, give me the 20%!

The longer we let a test run, the higher our opportunity costs stack up. If we wait too long, we’re forgoing serious wins that we could have found by launching other tests.

If a test drops below a 10% lift, it’s now too small to matter. Kill it. Shut it down and move on to your next test.

What if we have an 8% projected win at 96% certainty? It’s SO close! Or what if we have enough data to find 5% wins quickly?

Then we ask ourselves one very simple question: will this test hit certainty within 30 days? If you’re 2 weeks into the test and close to 99% certainty, let it run a bit longer. I do this myself.
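The 30-day question is just arithmetic. Here’s a back-of-the-envelope sketch with made-up numbers:

    # Will the test hit certainty within 30 days? (hypothetical numbers)
    required_n = 5000          # visitors needed per variant to reach certainty
    visitors_per_day = 150     # traffic currently hitting each variant

    days_needed = required_n / visitors_per_day
    if days_needed <= 30:
        print(f"Let it run: roughly {days_needed:.0f} days to a verdict")
    else:
        print("It won't hit certainty by day 30. Kill it and move on.")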

What happens at day 30? That leads us to our next kill rule.

Rule 6: If no winner after 1 month, kill it.

Chasing A/B test wins can be addictive. JUST. ONE. MORE. DAY. OF. DATA.

We’re emotionally invested in our idea. We love the new page that we just launched. And IT’S SO CLOSE TO WINNING. Just let it run a bit longer? PLEEEEEASE?

I get it, each of these tests becomes a personal pet project. And it’s heartbreaking to give up on it.

If you have a test that’s trending towards a win, let it keep going for the moment. But we have to cut ourselves off at some point. The problem is that many of these “small-win” tests are mirages. First they look like 15% wins. Then 10%. Then 5%. Then 2%. The more data you collect, the more the variant converges with your control.

CUT YOURSELF OFF. We need a rule that keeps our emotions in check. You gotta do it. Kill that flop of a test and move on to your next idea.

That’s why I have a 30-day kill rule. If the variant doesn’t hit 99% certainty by day 30, we kill it. Even if it’s at 98%, we shut it down on the spot and move on.

Rule 7: Build your next test while waiting for your data

Cycling through tests as fast as we can is the name of the game. We need to keep our testing pipeline STACKED.

There should be absolutely NO downtime between tests. How long does it take you to build a new variant? Starting with the initial idea, how long until it goes live? 2 weeks? 3 weeks? Maybe even an entire month?

If you wait to start on the next test until the current test is finished, you’ve wasted enough data for 1-2 other tests. That’s 1-2 other chances that you could have found that 20% win to stack on top of your other wins.

Do not waste data. Keep those tests running at full speed.

As soon as one test comes down, the next test goes up. Every time.

Yes, you’ll need a dedicated team in place for A/B tests. This is not a trivial amount of work. You’ll be launching A/B tests full time. And your team will need to be moving at full speed without any barriers.

If it were easy, everyone would be doing it.

Follow All 7 A/B Testing Rules to Consistently Drive Conversion Up and to the Right

Follow the system with discipline and it’s a matter of time before you double or triple your conversion rates. The longer you play, the more likely you’ll win.

Here are all the rules in one spot:

  1. Above all else, the control stands
  2. Get 2000+ people through the test within 30 days
  3. Always wait at least a week
  4. Only launch variants at 99% certainty
  5. If a test drops below a 10% lift, kill it.
  6. If no winner after 1 month, kill it.
  7. Build your next test while waiting for your data

How Live Chat Tools Impact Conversions and Why I Launched a Bad Variant

July 21, 2015 By Lars Lofgren 15 Comments

Do those live chat tools actually help your business? Will they get you more customers by allowing your visitors to chat directly with your team?

Like most tests, you can come up with theories that sound great for both sides.

Pro Live Chat Theory: Having a live chat tool helps people answer questions faster, see the value of your product, and will lead to more signups when people see how willing you are to help them.

Anti Live Chat Theory: It’s one more element on your site that will distract people from your primary CTAs so conversions will drop when you add it to your site.

These aren’t the only theories either, we could come up with dozens on both sides.

But which is it? Do signups go up or down when you put a live chat tool on the marketing site of your SaaS app?

It just so happens I ran this exact test while I was at Kissmetrics.

How We Set Up the Live Chat Tool Test

Before we ran the test, we already had Olark running on our pricing page. The Sales team requested it and we launched without running it through an A/B test. Anecdotally, it seemed helpful. An occasional high-quality lead would come through and it would help our SDR team disqualify poor leads faster.

Around September 2014, the Sales team started pushing to have Olark across our entire marketing site. Since I had taken ownership of signups, our marketing site, and our A/B tests, I pushed back. We weren’t just going to launch it, it needed to go through an A/B test first. I was pro-Olark at this point but wanted to make sure we weren’t cannibalizing our funnel by accident.

We got it slotted for an A/B test in Oct 2014 and decided to test it on 3 core pages of our marketing site: our Features, Customers, and Pricing pages.

Our control didn’t have Olark running at all. This means that we stripped it from our pricing page for the control. Only the variant would have Olark on any pages.

Here’s what our Olark popup looked like during business hours:

Kissmetrics Olark Popup Business Hours

And here it is after-hours:

Kissmetrics Olark Popup After Hours

Looking at the popups now, I wish I had done a once-over on the copy. It’s pretty bland and generic. Better copy might have gotten us better results. At the time, I decided to test whatever Sales wanted since this test was coming from them.

Setting up the A/B test was pretty simple. We used an internal tool to split visitors into variants randomly (this is how we ran most of our A/B tests at Kissmetrics). Half our visitors randomly got Olark, the other half never saw it. Then we tagged each group with Kissmetrics properties and used our own Kissmetrics A/B Test Report to see how conversions changed in our funnel.
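If you’re building your own split, one common pattern is to hash a stable visitor ID so assignment looks random but each visitor always lands in the same bucket. This is a generic sketch, not the actual Kissmetrics tool:

    # Deterministic 50/50 bucketing on a stable visitor ID (generic pattern).
    import hashlib

    def assign_variant(visitor_id: str, test_name: str = "olark-test") -> str:
        digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
        return "variant" if int(digest, 16) % 2 else "control"

    print(assign_variant("visitor-12345"))  # same ID, same bucket, every visit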

So how did the data play out anyway?

Not great.

Our Live Chat A/B Test Results

Here’s what Olark did to our signups:

Live Chat Tool Impact on Signup Conversions

A decrease of 8.59% at 81.38% statistical significance. I can’t say that we have a confirmed loser at this point. I prefer 99% statistical significance for those kinds of claims. But that data is not trending towards a winner.

How about activations? Did it improve signup quality and get more people to install Kissmetrics? That step of the funnel looked even worse:

Live Chat Tool Impact on Activations

A 22.14% decrease on activations at 97.32% statistical significance. Most marketers would declare this as a confirmed loser since we hit the 95% statistical significance threshold. Even if you push for 99% statistical significance, the results are not looking good at this point.

What about customers? Maybe it increased the total number of new customers somehow? I can’t share that data but the test was inconclusive that far down the funnel.

The Decision – Derailed by Internal Politics

So here’s what we know:

  • Olark might decrease signups by a small amount.
  • Olark is probably decreasing Kissmetrics installs.
  • The impact on customer counts is unknown.

Seems like a pretty straightforward decision, right? We’re looking at possible hits on signups and activations, then a complete roll of the dice on customers. These aren’t the kind of odds I like to play with. Downside at the top of the funnel with a slim chance of success at the bottom. We should have taken it down, right?

Unfortunately, that’s not what happened.

Olark is still live on the Kissmetrics site 9 months after we did the test. If you go to the pricing page, it’s still there:

Kissmetrics Live Chat Tool on Pricing Page

Why wouldn’t we kill a bad test? Why would we let a bad, risky variant live on?

Internal politics.

Here’s the thing: just because you have data doesn’t mean that decisions get made rationally.

I took these test results to one of our Sales directors at the time and said that I was going to take Olark off the site completely. That caused a bit of a firestorm. Alarms got passed up the Sales chain and I found myself in a meeting with the entire Sales leadership.

I wanted Olark gone. Sales was 100% against me.

Live chat is considered a best practice (or at least it was a best practice at one point). It’s a safe choice for any SaaS leadership team. I have no idea HOW it became a best practice considering the data I found but that’s not the point. There’s plenty of best practices that sound great but actually make things worse.

Here’s what the head of Sales told me: “Salesforce uses live chat so it should work for us too.”

But following tactics from industry leaders is the fastest path to mediocrity for a few reasons:

  • They might be testing it themselves to see if it works, you don’t know if it’s still mid-test or a win they’ve decided to keep.
  • They might not have tested it, they could be following best practices themselves and have no idea if it actually helps.
  • They may have gotten bad data but decided to keep it because of internal politics.
  • Even if it does work for them, there’s no guarantee that it’ll work for you. I’ve actually found most tactics to be very situational. There’s a few cases where a tactic helps immensely but most of the time it’s a waste of effort and has no impact.

It’s also difficult to understand how a live chat tool would decrease conversions. Maybe it’s a distraction, maybe not. But when you’re an SDR and you see good opportunities come in that help you meet your qualified lead quotas, it’s not easy to separate that anecdotal experience from the data on the entire system.

But none of this mattered. Sales was completely adamant about keeping it.

The ambiguity on customer counts didn’t help either. As long as it was an unknown, arguments could still be made in favor of Olark.

Why didn’t I let the test run longer and get enough data on how it impacted new customer counts? With how close the data was, we would have needed to run the test for several months before getting anywhere close to an answer. Since I had several other tests in my pipeline, I faced serious opportunity costs if I let the test run. Running one test for 3 months means not running 3-4 other tests that have a chance at being major wins.

So I faced a choice. I could have removed Olark if I was stubborn enough. My team had access to the marketing site, Sales didn’t. But standing my ground would start an internal battle between Marketing and Sales. It’d get escalated to our CEO and I’d spend the next couple of weeks arguing in meetings instead of trying to find other wins for the company. Regardless of the final decision, the whole ordeal would fray relationships between the teams. I’d also burn a lot of social capital if I decided to push my decision through. With the decrease in trust, there would be all sorts of long-term costs that would prevent us from executing effectively on future projects.

I pushed back and luckily got agreement not to launch it on the Features or Customers pages. But Sales wouldn’t budge on the Pricing page. I chose to let it drop and it lives to this day.

That’s how I launched a variant that decreased conversions.

Should You Use a Live Chat Tool on Your Site?

Could a live chat tool increase the conversions on your site? Possibly. Just because it didn’t work for me doesn’t mean it won’t work for you.

Are there other places that I would place a live chat tool? Maybe a support site or within a product? Certainly. There are plenty of cases where acquisition matters less than helping people as quickly as possible.

Would I use a live chat tool at an early stage startup to collect every possible bit of feedback I could? Regardless of what it did to signups? Most definitely. Any qualitative feedback at this stage is immensely valuable as you iterate to product/market fit. Sacrificing a few signups is well worth it to be able to chat with prospects.

If I was trying to increase conversions to signups, activations, and customers, would I launch a live chat tool on a SaaS marketing site without A/B testing it first? Absolutely not. Since this test didn’t go well, I wouldn’t launch a live chat tool without conclusive data proving that it helped conversions.

Olark and the rest of the live chat companies have great products. There’s definitely ways for them to add a ton of value. Getting lots of qualitative feedback at an early stage startup is probably the strongest use case that I see. But if your goal is to increase signups, activations, and customers, I’d be very careful with assuming that a live chat tool will help you.

How to Keep Riding the Slack Rocketship Without Blowing It Up

March 7, 2015 By Lars Lofgren 8 Comments

Growth graphs like this don’t come around too often:

Slack Daily Active Users, 1-Year Growth (Feb 12, 2015)

I’ve gotta hand it to Slack, they’re playing the PR game pretty well too. Headline after headline on Techcrunch and Techmeme keep telling us all how crazy the growth is over there. They’ve already built the brand to go with it.

But here’s the thing about rocketships: they either go further than you ever thought possible or they blow up in your face.

It all comes down to momentum. Keep it up and you’ll quickly dominate your market.

But once you start to lose momentum, the rocketship rips itself apart. Competitors catch up, market opportunities slip away, talent starts to leave, and growth stalls. It’s a nasty feedback loop that’ll do irrecoverable damage. Once that rocketship lifts off, you either keep accelerating growth or momentum slips away as you come crashing back down. Grow or die.

There’s no room for mistakes. But when driving growth, there are 2 forces that will consistently try to bring you crashing back down.

1) The Counter-Intuitive Nature of Growth

At KISSmetrics, I launched a bunch of tests that didn’t make a single bit of sense when you first looked at them. And many of them were our biggest winners.

Here’s a good example. Spend any time learning about conversion optimization and you’ll come across advice telling you to simplify your funnel. Get rid of steps, get rid of form fields, make it easier. Makes sense right? Less effort means more people get to the end. In some cases, this is exactly what happens.

In other cases, you’ll grow faster by adding extra steps. Yup, make it harder to get to the end and more people finish the funnel. This is exactly what happened during one of our homepage tests. We added an entire extra step to our signup flow and instantly bumped our conversions to signups by 59.4%.

Here’s the control:

Home Page Oauth Test Control

Here’s the variant:

Home Page Oauth Test Variant

In this case, the variant dropped people straight into a Google Oauth flow. But we didn’t get rid of our signup form since we still needed people to fill out the lead info for our Sales team.

Number of steps on the control:

  1. Homepage
  2. First part of signup form
  3. Second part of signup form (appeared as soon as you finished the first 3 fields)

Number of steps on the variant:

  1. Homepage
  2. Google account select
  3. Google oauth verification
  4. Signup form completion

You could say the minimalist design on that variant helped give us a winner, which is true. But we saw this “add extra steps to get more conversions” pattern across multiple tests. It works like magic on homepages, signup forms, and webinar registrations. It’s one of my go-to growth hacks at this point.

Counter-intuitive results come up so often that it’s pretty difficult to call winners before you launch a test. At KISSmetrics, we had some of the best conversion experts in the industry like Hiten Shah and Neil Patel. Even with this world-class talent, we STILL only found winners 33% of the time. That’s right, the majority of our tests FAILED.

We ran tests that I would have put good money on. And guess what? They didn’t move our metrics even a smidge. The majority of our tests made absolutely zero impact on our growth.

It takes a LOT of testing to find wins like this. So accelerating growth isn’t a straight-forward process. You’ll run into plenty of dead-ends and rabbit holes as you learn which levers truly drive growth for your company.

2) You’ll Get Blind-Sided by False Positives

Fair enough, growth is counter-intuitive. Let’s just A/B test a bunch of stuff, wait for 95% statistical significance, and then launch the winners. Problem solved!

Not so fast…

That 95% statistical significance that you’ve placed so much faith in? It’s got a major flaw.

A/B tests will lead you astray if you’re not careful. In fact, they’re a lot riskier than most people realize and are riddled with false positives. Unless you do it right, your conversions will start bouncing up and down as bad variants get launched accidentally.

Too much variance in the system and too many false positives means you’re putting a magical growth rate at serious risk. Slack wants to get as much risk off the table while still chasing big wins. And the normal approach to statistical significance doesn’t cut it. 95% statistical significance launches too many false positives that will drag down conversions and slow momentum.

Let’s take a step back. 95% statistical significance comes from scientific research and is widely accepted as the standard for determining whether or not two different data sets are just random noise. But here’s what gets missed: 95% statistical significance only works if you’ve done several other key steps ahead of time. First you need to determine the minimum percentage improvement that you want to detect. Like a 10% change for example. THEN you need to calculate the sample size that you need for your experiment. The results don’t mean anything until you hit that minimum sample size.

Want to know how many acquisition folks calculate the sample size based on the minimum difference in conversions that they want to detect? Zero. I’ve never heard of a single person doing this. Let me know if you do, I’ll give you a gold star.

But I don’t really blame anyone for not doing all this extra work. It’s a pain. There’s already a ton of steps needed in order to launch any given A/B test. Hypothesis prioritization, estimating impact on your funnel, copy, wireframes, design, front-end and back-end engineering, tracking, making the final call, and documentation. No one’s particularly excited about adding a bunch of hard-core statistical steps to the workflow. This also bumps the required sample sizes for conversions into the thousands. Probably not a major problem for Slack at this point but it will dramatically slow the number of tests that they can launch. Finding those big wins is a quantity game. If you want higher conversions and faster viral loops, it’s all about finding ways to run more tests.

When you’re in Slack’s position, the absolute last thing you want to do is expose yourself to any unnecessary variance in your funnel and viral loops. Every single change needs to accelerate the funnel like clockwork. There’s too much at stake if any momentum is lost at this point. So is there another option other than doing all that heavy duty stats work for each A/B test? Yes there is.

The Key to Keeping Rocketships Flying

Right now, the team at Slack needs to be focusing on one thing: how not to lose.

Co-founder Stewart Butterfield mentioned that he’s not sure where the growth is coming from. This is a dangerous spot to be in. As they start to dive into their funnel, there’s a serious risk of launching false winners. They’ll need every last bit of momentum if they want to avoid plateauing early.

As it turns out, there is a growth strategy that takes these A/B testing risks off the table. It’s disciplined, it’s methodical, and it finds the big wins without exposing you to the normal volatility of A/B testing. I used it at KISSmetrics to grow our monthly signups by over 267% in one year.

Here’s the key: bump your A/B decision requirement to 99% statistical significance. Don’t launch a variant unless you hit 99%. If you’re at 98.9% or less, keep the control. And run everything you can through an A/B test.

Dead serious, the control reigns unless you hit 99% statistical significance. You’ll be able to keep chasing counter-intuitive big wins while protecting your momentum.

At KISSmetrics, we actually did a bunch of Monte Carlo simulations to compare different A/B Testing strategies over time.

I’ve posted the results from 3 different strategies below. Basically, more area under the curve means more conversions earned. Each dot represents a test that looked like a winner. You’ll notice that many dots actually bring the conversions down. This comes from false positives and not being rigorous enough with your A/B testing.

Here’s what you get if you use the scientific researcher strategy:

Conversion Rate vs. Observations: The Scientific Researcher

Not much variance in this system. Winners are almost always real winners.

Here’s your regular sloppy 95% statistical significance strategy that makes changes as early as 500 people in the test:

Conversion Rate vs. Observations: The Impatient Marketer

Conversions bounce around quite a bit. False wins come up often, which means that if you sit on a particular variation for long, it will drag those conversions down and slow growth. There goes your momentum.

Now let’s look at the 99% strategy that waits for at least 2000 people in the test for a decent sample size:

Conversion Rate vs. Observations: The Realist

Still a chance to pick up false winners here but a lot less variance than 95%. Let’s quantify all 3 strategies real quick by calculating the area under the curve. Then we’ll be able to compare them instead of just eye-balling the simulations.

  • Statistical researcher = 67759
  • 95% statistical significance = 60532
  • 99% statistical significance = 67896

Bottom line: the 99% strategy performs just as well as the scientific researcher and a lot better than the sloppy 95%. It’s also easy enough for any team to implement without having to do the extra stats work.
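If you want to sanity-check this yourself, here’s a toy version of that kind of simulation (not our original code). It runs a bunch of A/A tests where the variant is secretly identical to the control, peeks at the data the way an impatient marketer would, and counts the false winners:

    # Toy Monte Carlo: A/A tests with repeated peeking vs a 99% rule.
    import random
    from statsmodels.stats.proportion import proportions_ztest

    def false_winner_rate(alpha, checkpoints, trials=1000, base_rate=0.10):
        false_wins = 0
        max_n = max(checkpoints)
        for _ in range(trials):
            a = [random.random() < base_rate for _ in range(max_n)]
            b = [random.random() < base_rate for _ in range(max_n)]
            for n in checkpoints:  # peek at the data at each checkpoint
                _, p = proportions_ztest([sum(b[:n]), sum(a[:n])], [n, n])
                if p < alpha and sum(b[:n]) > sum(a[:n]):
                    false_wins += 1
                    break
        return false_wins / trials

    # Impatient marketer: 95% rule, peeking from 500 people per variant
    print(false_winner_rate(alpha=0.05, checkpoints=[500, 1000, 1500, 2000]))
    # Realist: 99% rule, no decision before 2000 people per variant
    print(false_winner_rate(alpha=0.01, checkpoints=[2000]))

Every peek is another chance to get fooled, which is why the impatient 95% strategy crowns false winners far more often than the textbook 1 in 20.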

The 99% rule is my main A/B testing rule but here are all of them:

  • Control stands unless the variant hits a lift at 99% statistical significance.
  • Run the test for at least a week to get a full business cycle.
  • Get 2,000 people through the test so you have at least a halfway decent sample size.
  • If the test looks like a loser or has an expected lift of less than 10%, kill it and move on to the next test. Only chase single digit wins if you have a funnel with crazy volume.

I used these rules to double and triple multiple steps of the KISSmetrics funnel. They reduce the risk of damaging the funnel to the bare minimum, accelerate the learning of your team, and uncover the biggest wins. That’s how you keep your growth momentum.

Embedding A/B Tests Into the Slack Culture

I can give you the rules for how to run a growth program. But you know what? It won’t get you very far unless you instill A/B tests into the fabric of the company. The Slack team needs to pulse with A/B testing. Even the recruiters and support folks need to get excited about this stuff.

This is actually where I failed at KISSmetrics. Our Growth and Marketing teams understood A/B testing and our entire philosophy behind it. We cranked day in and day out. It was our magic sauce.

But the rest of the team? Did Sales or Support ever get it? Nope. Which meant I spent too much time fighting for the methodology instead of working on new tests. If I had spent more time bringing other teams into the fold from the beginning, who knows how much further we could have gone.

If I was at Slack, one of my main priorities would be to instill A/B testing into every single person at the company. Here’s a few ideas on how I’d pull that off:

  • Before each test, I’d show the entire company what we’re about to test. Then have everyone vote or bet on the winners. Get the whole company to put some skin in the game. Everyone will get a feel for how to accelerate growth consistently.
  • Weekly A/B testing review. Make sure at least one person from each team is there. Go through all the active A/B tests, current results, which ones finished, final decisions, and what you learned from them. The real magic of A/B testing comes from what you’re learning on each test so spread these lessons far and wide.
  • Do monthly A/B testing growth talks internally. Include the rules for testing, why you A/B test, the biggest wins, and have people predict old tests so they get a feel for how hard it is to predict ahead of time. Get all new hires into these. Very few people have been through the A/B test grind, you need to get everyone up to speed quickly.
  • Monthly brainstorm and review of all the current testing ideas in the pipe. Invite the whole company to these things. Always remember how hard it is to predict the big winners ahead of time, you want testing ideas coming at you from as many sources as possible.

Keep Driving The Momentum of That Slack Rocketship

I’m really hoping the team at Slack has already found ways to avoid all the pitfalls above. They’ve got something magical and it would be a shame to lose it.

To the women and gents at Slack:

  • Follow the data.
  • Get the launch tempo as high as possible for growth, you’ll need to run through an awful lot of ideas before you find the ones that truly make a difference.
  • Only make changes at 99% statistical significance.
  • Spread the A/B testing Koolaid far and wide.
  • Don’t settle. You’ve got the magic, do something amazing with it.

Two Mistakes I Made on the Engines of Growth

May 30, 2014 By Lars Lofgren 12 Comments

In The Lean Startup, Eric Ries describes 3 engines of growth:

  • The Sticky Engine
  • The Viral Engine
  • The Paid Engine

In a post of mine, I claimed that there’s really only 2 engines. Short and sweet summary:

There aren’t 3 engines of growth, there’s only 2: organic and paid. I lumped word-of-mouth into the viral engine and explained how retention, the main focus of the sticky engine, doesn’t drive growth. This is because churn scales with your acquisition, so if you only focus on retention, your growth will stall regardless of how low you get your churn. So there’s no reason to have a sticky engine of growth.

But I made two mistakes.

One was an outright error on my part. The other was an omission which adds a bit of nuance.

Eric Ries even responded to the post (which was awesome) with 4 tweets:

@__tosh @LarsLofgren couple disagreements: 1) viral growth is special, where invites happen as a necessary side effect of product usage

— Eric Ries (@ericries) April 11, 2014

@__tosh @LarsLofgren 2) your analysis of churn presupposes that WoM is constant, but it’s not: it is proportional to size of customer base

— Eric Ries (@ericries) April 11, 2014

@__tosh @LarsLofgren 3) I never said that churn produces growth. Sticky growth compounds if WoM > churn.

— Eric Ries (@ericries) April 11, 2014

@__tosh @LarsLofgren 4) lumping different forms of acquisition into the “organic” bucket is a mistake I see hurt a lot of startups

— Eric Ries (@ericries) April 11, 2014

Error #1: Blending Word of Mouth and Viral Engines

Eric Ries clearly separates viral from word-of-mouth (organic) engines of growth. I incorrectly lumped them together. In the first paragraph of his section on viral engines of growth, Ries states:

“This is distinct from the simple word-of-mouth growth discussed above [the sticky engine of growth]. Instead, products that exhibit viral growth depend on person-to-person transmission as a necessary consequence of normal product use. Customers are not intentionally acting as evangelists; they are not necessarily trying to spread the word about the product. Growth happens automatically as a side effect of customers using the product.”

Page 212 of The Lean Startup if you’re curious.

Fair enough, viral and word-of-mouth engines aren’t the same. One depends on delighting customers to the point where they voluntarily tell others about you. Viral engines depend on making the product visible to others as each customer uses it.

Keeping viral and word-of-mouth engines separate makes a lot of sense.

That’ll teach me to build off of frameworks while skimming them instead of reading the entire chapter again.

My bad.

Error #2: Omitting that the Sticky Engine Scales When Word-of-Mouth Exceeds Churn

Eric Ries focuses pretty heavily on getting retention as high as possible for the sticky engine of growth. Page 212 in The Lean Startup:

“[For an engagement business] its focus needs to be on improving customer retention. This goes against the standard intuition in that if a company lacks growth, it should invest more in sales and marketing.”

In my post, I showed that growth hits a ceiling no matter how low you get your churn. This is because churn will eventually match your current acquisition rate. Even if you lower churn, your growth looks like this:

10% Monthly Churn, Reduced to 7.5%

There’s one main exception to this.

When your word-of-mouth growth rate exceeds your churn rate, you’ll grow exponentially. Even though your churn grows each month, so does your word-of-mouth. Then you get a nice compounding growth rate that accelerates over time. Ries points this out on page 211:

“The rules that govern the sticky engine of growth are pretty simple: if the rate of new customer acquisition exceeds the churn rate, the product will grow.”

But this doesn’t change my primary point: churn is not the key to growth for the sticky engine. Accelerating word-of-mouth is. Getting customers to keep using your product is one thing. Getting them to put their own reputation on the line by recommending you is another hurdle entirely. You’ll still hit low churn long before you see any substantial word-of-mouth.
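To make the compounding concrete, here’s a tiny sketch with made-up rates:

    # Sticky engine math: word-of-mouth adds customers, churn removes them.
    def sticky_growth(customers, wom_rate, churn_rate, months=24):
        for _ in range(months):
            customers += customers * (wom_rate - churn_rate)
        return round(customers)

    # Word-of-mouth above churn compounds; below churn, the base decays.
    print(sticky_growth(1000, wom_rate=0.08, churn_rate=0.05))  # ~2033, doubling
    print(sticky_growth(1000, wom_rate=0.03, churn_rate=0.05))  # ~616, shrinking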

Ries does bring up an example of a business that has 40% churn and 40% acquisition at the same time. And when your churn matches your acquisition, you stall. He focuses on lowering churn to get the sticky engine running. But I’m skeptical that the acquisition is coming from actual word-of-mouth. With churn that high, I’d expect the acquisition to be from conventional sales and marketing channels that don’t scale with churn. And if that’s the case, lowering churn is only the first step. You’ll hit a new ceiling since your acquisition won’t scale as easily as churn does.

If you have worked with a business that achieved high rates of growth from word-of-mouth but also had high rates of churn, I’d love to hear about it. Be sure to let me know in the comments.

Once again, churn is just one piece of the puzzle. You’ll still need to keep refining your product and improving your customer support long after you achieve low churn. Word-of-mouth requires delighting customers at an entirely different level than what it takes to keep them around. In other words, low churn is the first step to word-of-mouth growth. It grows your average customer value and extends your runway. But you’ll need to keep pushing in order to get your word-of-mouth high enough that it outpaces churn. Then, and only then, will you have a sticky growth engine.

If you focus on delighting customers to the point where you get a sizable amount of word-of-mouth growth, you’ll hit low churn along the way.

To recap, you have two options when your growth stalls:

  1. Find a way to accelerate your current acquisition with paid or viral engines (you’ll eventually hit another plateau unless you keep accelerating it)
  2. Focus on your product and customer support to increase word-of-mouth (and lower churn along the way).

Lowering your churn will make either strategy more viable. You’ll either start growing exponentially at a lower rate of word-of-mouth or you’ll lower the demands on your acquisition, which makes it easier to outpace churn.

Sorry Eric Ries, There’s Only Two Engines of Growth

April 10, 2014 By Lars Lofgren 6 Comments

UPDATE: I actually made 2 errors with this post. I decided to correct them with a new post so you can see exactly what happened. See the mistakes I made in this post over here.

One of my all-time favorite books for startups is Eric Ries’ The Lean Startup.

In it, Eric Ries breaks down three engines of growth:

1. The Viral Engine of Growth – Word of mouth or viral invite system

2. The Paid Engine of Growth – Pay for each customer through ads or marketing systems

I’m mostly in agreement with Ries on these two. You’re either going to have to pay for your customers or you’ll need to grow from word-of-mouth/viral invite systems.

But the third system of growth really isn’t a system of growth:

3. The Sticky Engine of Growth – Keep customers engaged over the long term and reduce churn

This applies to two types of businesses: subscription and user engagement. Software-as-a-Service companies use subscriptions, so the longer people stay subscribed, the more money they make. Consumer tech companies like Twitter, Instagram, or Facebook rely on user engagement so they can monetize their users with ads. In both cases, the business benefits as users keep using the product over the long term.

The strategy for this growth engine is pretty straightforward: reduce your churn to increase the value of your customers. You do this by keeping customers engaged and lowering the percentage that leave in any given month (your churn rate).

But Ries’ Sticky Engine of Growth doesn’t actually produce growth that scales.

You Can’t Get Hockey Stick Growth By Only Attacking Churn

Churn is not a path to growth. It simply raises your growth limit. It’s the ceiling that lets you keep playing. It buys you time and gives you more breathing room.

But if you want hockey-stick growth, you’ll need to build another engine of growth WHILE attacking your churn rate.

Here’s the problem: when you have a “sticky” business and need long-term customer engagement, churn puts your business into a constant rate of decay.

Churn nips at your heels, rots your customer base, and will deadweight your company if you’re not careful.

Let’s do a quick example.

Say you have a 10% churn rate for a SaaS app. Let’s also say that you’ve found a way to acquire 100 customers per month. Here’s what happens to your growth if you keep your acquisition constant:

10% Monthly Churn

Early on, the 10% churn doesn’t really matter. Your 100 new customers easily make up for it. But once you get to 1000 customers, your churn rate equals your acquisition rate. Within 2 years, your business has stalled.
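You can check the plateau with a few lines of arithmetic. The ceiling is your acquisition divided by your churn rate, which works out to 100 / 0.10 = 1,000 customers here:

    # Constant acquisition against a constant churn rate plateaus at
    # acquisition / churn. Here: 100 / 0.10 = 1,000 customers.
    def project(customers, new_per_month, churn_rate, months):
        for _ in range(months):
            customers = customers * (1 - churn_rate) + new_per_month
        return round(customers)

    print(project(0, new_per_month=100, churn_rate=0.10, months=12))  # 718
    print(project(0, new_per_month=100, churn_rate=0.10, months=24))  # 920
    print(project(0, new_per_month=100, churn_rate=0.10, months=60))  # 998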

In order to beat churn, you have to keep accelerating your growth. Even if you have 1-2% churn (the goal for SaaS companies), your growth will consistently slow down unless you build another engine to accelerate it. Churn doesn’t get you to the next level, it simply lets you take another shot.

Now let’s say that we reduce the churn rate from 10% to 7.5% after 6 months. Here’s how your growth differs from the first example:

10% Monthly Churn, Reduced to 7.5%

See how you hit that next ceiling after a small spike? When people talk about growth from lower churn, it’s that initial spike since the growth rate now exceeds the churn rate. But it doesn’t take long for the new churn rate to catch up and stall the business again.

No matter how low you get your churn, you’ll hit a cap sooner or later. Your growth will keep slowing down as every month goes by. The only way to accelerate growth is to build one of the primary growth engines: organic or paid.

The reason that churn is so nasty is that it quickly scales to the size of your business. 10% churn with 100 customers means that 10 customers left this month. If you somehow manage to get to 100,000 customers without addressing your churn, you’ll now be losing 10,000 customers each month. It’s fairly consistent all the way up. But marketing, sales, and growth systems don’t scale so easily. Paying for 10,000 new customers each month is an entirely different game than 10 new customers. Even viral systems don’t scale forever, they’ll start to slow and churn will catch up in a hurry.

And don’t convince yourself that you can achieve some absurd churn rate like 0.1%. Top-tier SaaS businesses are in the 1-2% range, maybe as low as 0.75%. There are hard limits on how low you can go.

Considering that most VCs are looking for at least 100% year-over-year revenue growth rates for SaaS (consumer tech has even more absurd growth benchmarks), you need to build a growth engine that doesn’t mess around.

I’ve spent 2 years understanding the growth model of a SaaS business at KISSmetrics. While churn is one of our top priorities, we wouldn’t get very far unless we committed to building an additional growth engine. That’s why we built out our marketing and sales teams.

Maybe you double-down on product and customer service to accelerate word-of-mouth. If you’re in consumer tech, a viral invite system might work if it adds to the core value of your product. Or maybe you build a paid engine with content marketing and ad buys. Either engine can work. But you need to remember that growth won’t come just from lower churn.

Why does this matter?

If you’re building a business that relies on keeping customers engaged over time, you cannot expect to grow your company from just a low churn rate.

Churn is absolutely CRITICAL to the success of your business. But it’s only one piece of the puzzle.

Look at any SaaS business that has IPO’d recently like Marketo or Box. They all have massive marketing/sales budgets. They’re even hemorrhaging cash to keep accelerating their growth rates.

That being said, I DO agree with Ries that the primary goal of a sticky business model is to focus on customer retention. No subscription or engagement business is going to get very far unless they control their churn. You’ll hit a ceiling that won’t budge until you do. Before you can think about growth, you need to get your churn to acceptable levels. Or all your customers will leave just as fast as you acquired them.

But once you have a low churn rate, growth isn’t going to magically appear. And a business looking for high rates of growth will need to acquire customers at scale. Raising engagement will increase the value of your current customers but it won’t necessarily bring you new customers. You’ll need to delight customers to the point that word-of-mouth and virality start working in your favor. Or you’ll need to start paying for customers.

Sticky Engines Don’t Acquire Customers, They Grow Customer Value

The primary benefit of sticky engines isn’t growth, it’s an increase in customer value.

Any subscription or engagement business attempts to spread customer payments out over a long period of time. For many SaaS businesses, the goal is to keep customers subscribed for 24-36 months. By spreading payments out, you’re able to increase the value of your average customer. This is one of the main reasons that tech companies have moved to subscription payments instead of up-front software licenses. And consumer tech companies can monetize long-term, active users a lot easier with ad revenue.
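The math behind that customer value increase is simple: average customer lifetime is roughly 1 divided by your monthly churn rate. A quick sketch with a hypothetical price point:

    # Lower churn means longer lifetimes and higher customer value.
    monthly_price = 150  # hypothetical average subscription price

    for churn in (0.10, 0.05, 0.02):
        lifetime_months = 1 / churn            # average customer lifetime
        lifetime_value = monthly_price * lifetime_months
        print(f"{churn:.0%} churn: {lifetime_months:.0f} months, ${lifetime_value:,.0f} LTV")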

In fact, a well-executed upsell and churn reduction system can give you negative churn. This means the value of your current customers is increasing faster than the value lost from customers leaving. Your total customer count drops while your revenue increases slightly. Even if you don’t acquire any more customers, your revenue will still grow. At least in the short-term.

But this isn’t considered a primary growth engine. It’s mainly a strategy to mitigate the impact of churn so you get the full benefit of your real growth engine. The revenue growth from negative churn pales in comparison to any half-decent growth engine. Negative churn will only give you marginal gains.

Won’t Better Engagement Lead to More Word-of-Mouth and Virality?

Possibly.

Word-of-mouth growth requires a level of engagement well beyond what it takes just to keep customers engaged each month. Providing enough value to keep customers interested is one thing, providing enough for them to drag their friends into the product is something else altogether.

If you’re pursuing organic growth instead of paid, many of the tactics you employ will be very similar to the tactics that you’d use to reduce churn:

  • Improve value of product
  • Reduce friction across all customer touch-points
  • Focus on a single market
  • Provide fast and helpful customer service

But you’ll need to perfect these tactics and delight your customers at a level well beyond what it takes just to reduce your churn. At that point, you’re deliberately pursuing an organic engine of growth.

That being said, I’m a huge fan of Eric Ries’ book The Lean Startup. I definitely consider it one of the classics for startups. It’s a huge inspiration for my own work and I highly recommend it.

Growth Comes From One of Two Places (And Only Two)

March 25, 2014 By Lars Lofgren 2 Comments

At the end of the day, there’s only two ways to acquire new customers.

Social, advertising, blogging, affiliates, direct mail, word-of-mouth, user invites…

All these channels fit into one of two acquisition strategies. Both can work beautifully but you need to know what game you’re in. Each requires different team structures and different strategies. Unless you pick one deliberately and have the strategy to back it, you won’t get anywhere at all.

Here they are.

The Organic Growth Engine

This is the fabled word-of-mouth we all say we do a great job at. Honestly, only a few of us build companies that truly grow from word-of-mouth.

A lot of companies that reach impressive growth off of this engine proudly proclaim that they have a marketing budget of $0.

And they should be proud, it’s not easy to accelerate growth purely from word-of-mouth.

So how do you build an organic growth engine?

Double down on product and service.

Your product can’t just be good, it needs to be amazing. And your customer service can’t just solve problems, it needs to provide a legendary level of service. Think of Zappos. If you go down this road, the only thing that matters is how happy you make your customers. You’ll need to dedicate a significant amount of your resources to product and customer support. You’ll need a process for improving your product methodically every single month. You’ll need to ruthlessly perfect every detail.

You won’t achieve break-out organic growth by accident. You need to build a first-class team and product.

I should correct myself. Every once in a while, a product becomes a market hit for seemingly no reason. It’s not even a great product but for whatever reason, people go nuts over it. Flappy Bird is a great example. You can’t manufacture this type of success, it’s like hitting the lottery. It might happen but you definitely don’t want to depend on it. And most of us will never experience it.

What about those viral invite systems? Don’t they count as organic growth?

Viral social networks do fit in this category. They take a great product then optimize their network effects and invite systems to spread their product as fast as possible. WhatsApp, Snapchat, Facebook, and all the other social apps that have spread like wildfire. But even with these crazy success stories, they all start with a great product that people love.

But you can’t hack virality.

Especially with the popularity of growth hacking these days, every junior marketer thinks they can build a quick invite system and follow in the footsteps of LinkedIn, Skype, or Dropbox.

Bolting an invite system to a lack-luster app isn’t going to get you anywhere. Your product needs to be good enough that it will spread even without an optimized invite system. Build an amazing product that people already want to share. Then make it even easier for them to do so.

So viral marketing engines aren’t an acquisition strategy. They merely take what’s already happening (an organic growth engine through word-of-mouth) and accelerate it by making the word-of-mouth even easier.

Even “viral” marketing campaigns don’t drive organic growth. These are in the paid engine of growth. It’s just a cheaper way to get more eyeballs on your marketing campaign. But you still have to pay for the campaign in the first place. Either outright or with labor by having your team work on it.

The organic growth engine sounds amazing right?

After all, you’re getting customers for free. What’s not to like?

Well there’s one major downside.

You’re not in control. Your growth is entirely at the mercy of how much your customers talk about you. For many businesses, there isn’t a straightforward way to accelerate your acquisition at a predictable pace. Consumer tech products can usually optimize invite systems and activation rates (the number of people that start using a core feature of the product). For B2B or other products that depend heavily on word-of-mouth, you can’t systematically optimize your acquisition. You’ll need to keep going back to your product, improving it, and hoping word-of-mouth accelerates.

The only way to accelerate growth predictably is to start building a second paid growth engine to acquire customers.

Keep these points in mind if you go down the organic growth road:

  • You won’t be able to predict or reliably accelerate growth from an organic growth engine.
  • Commit as many resources as you can to product and customer service.
  • Your entire team needs to be unbelievably anal about the smallest of details. The customer experience needs to be perfected. Iterate endlessly on every customer touchpoint.
  • Viral growth engines harness organic growth that’s already happening. You can’t bolt an invite system to a sub-par product and expect any results.

The Paid Growth Engine

This includes everything you currently spend on sales and marketing.

Did you just hire a field sales rep to find and close $100,000 deals? Paid growth. Building a blog to attract traffic and free trials? Paid growth. Television, Facebook, or billboard ads? Paid again.

“Organic” or inbound online marketing isn’t really an organic engine. It’s just a paid engine that you don’t pay directly for. Instead of dropping cash on ads of some kind, you hire people to write content, build systems, and attract customers with their labor. Your marketing budget is now their salaries.

Paid engines may be easier to understand (just go buy customers!) but that doesn’t make them any easier to execute.

Many paid channels simply won’t work for your target market. For whatever reason, your market won’t respond to that channel.

And the worst part is that the good channels are different for every business/market. Affiliates might work beautifully for one business, but a slight change in the market can turn them into a total flop.

Let’s look at an easy example.

Targeting teenagers? The hippest social network might be a great source of growth for you. Going after senior Fortune 500 executives in their 50s? Forget the social nonsense; try business conferences, networking, and outbound sales.

Differences get a lot more subtle than this. Hosting companies typically do really well with affiliate programs. Freelancers constantly recommend hosting to their clients, so if you give them an affiliate deal and make them look good with a reliable product, it’s a great source of growth. Take the exact same affiliate program, apply it to some other SaaS app, and it completely fails.

Even worse, there’s always a learning curve with each channel.

The first time you jump into a new channel, you’re not going to do well. You not only need to learn the fundamentals of that channel, you also need to learn how your market responds to it.

This takes time and money to work through.

The only way to hack this learning curve is to find someone with experience in the channel AND your market. You can’t just get by with experience in a particular channel since that channel may not work out for you. But if you can only choose one (market or channel), find someone with experience in your market. Find the channel experts after you’ve already validated the channel and know it’ll produce profitable customers consistently.

You need to run through as many channels as you can. Test each of them thoroughly enough to make sure that any failures are the result of a poor channel and not poor execution.

With deep pockets, this isn’t a big deal. Minimize your bets so you can repeatedly test different combos until you find one that produces plenty of customers. Once you’ve got one channel going, build out dedicated sales and marketing teams to optimize and scale your paid engine.

But for startups with a limited runway, getting through the learning curve on each channel can really suck. If you don’t move fast enough, you won’t find the winning channel before you’re out of cash. Remember that the best way to hack this process is to find someone with experience in your target market. They’ll be able to get you headed in the right direction.

Once you do find a great channel to grow from, it’s not all gumdrops and roses. Every channel has diminishing returns. You can only acquire so much traffic, buy so many ads, or run so many campaigns at a given time. Deeper pockets don’t solve this problem. Each channel has a cap on growth no matter how much cash you have at the ready.

So as soon as you find a great channel to build from, start experimenting with others to keep you growing after you hit the cap on the first.

Keep these points in mind if you go down the paid engine road:

  • It’s a margin game: make sure you can afford what you’re paying to acquire each customer (see the sketch after this list).
  • Every channel works differently for each market. Just because a channel works in one market doesn’t mean it will work in yours.
  • Be wary of the learning curve in each channel. Make sure poor channel performance is from a bad fit with your market instead of poor execution.
  • To short-cut the learning curve, find someone with experience in your target market (experience with a particular channel isn’t good enough).
  • Great channels only get you so far; you’ll hit diminishing returns sooner or later. Find new channels of growth before you need them.
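To ground the margin point in the first bullet, here’s a minimal sketch of the standard CAC-versus-lifetime-gross-profit check. All the numbers are hypothetical assumptions:

```python
# Minimal CAC vs. lifetime-gross-profit check with hypothetical numbers.
monthly_revenue = 50.0    # avg revenue per customer per month (assumption)
gross_margin = 0.80       # margin left after serving the customer (assumption)
avg_lifetime_months = 18  # how long a typical customer stays (assumption)
cac = 400.0               # fully loaded cost to acquire one customer (assumption)

lifetime_gross_profit = monthly_revenue * gross_margin * avg_lifetime_months  # $720
payback_months = cac / (monthly_revenue * gross_margin)  # 10 months

print(f"lifetime gross profit: ${lifetime_gross_profit:.0f} vs CAC: ${cac:.0f}")
print(f"payback period: {payback_months:.1f} months")

# Here each customer returns $720 in gross profit against a $400 CAC,
# but you float that $400 for about 10 months before breaking even.
# If a channel's CAC climbs past lifetime gross profit, you can't
# afford to buy customers there no matter how well the channel "works".
```

The payback period matters as much as the margin itself: a channel can be profitable on paper and still starve a startup with limited runway.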

Building Both Growth Engines

You can build both engines. But you can’t excel at both.

This comes down to priorities. Pure and simple.

You won’t be able to build a world-class paid acquisition team while building a world-class product. Not only will you push your team in too many directions, which prevents world-class execution, you’ll also face plenty of decisions that force you to make trade-offs between the two.

How hard do you push your sales and conversions? Do you use every spammy tactic out there to drive conversions at all costs and minimize your cost per acquisition? Or do you take it easy and focus on product? You’ll need to draw the line somewhere. You won’t be able to position yourself as the “amazing company that bends over backwards to delight customers” while spamming them with upsell offers.

Think of it as a continuum. At one end of the spectrum, you have a 100% paid engine business. Your product might suck or it’s just average. But you’re able to achieve great growth rates because you have your marketing/sales machine DIALED. You’re squeezing every penny out of your acquisition process. Plenty of companies go this route.

Or you could build a 100% organic engine and focus entirely on the quality of your product. You won’t even have a marketing team. And if you dominate your industry, you’ll be able to brag about how you had a $0 marketing budget the whole time.

Either option can work well but you won’t be able to do both at the same time.

You can also blend them, maybe 70% organic and 30% paid. Build a great product that’s a clear priority for your team, but also do a few core marketing projects exceptionally well. You’re not building a paid engine at all costs; it’s there to support and accelerate an already thriving organic engine.

Apple is a blend. Obviously, the vast majority of their resources go into their products. But they also do a great job at a few key marketing tasks. Not only do their product announcements capture the attention of the entire tech industry, they’ve run great television ads like Get a Mac (I’m a Mac), Think Different (Crazy Ones), or even their newer Intention ad. They’ve made their organic growth engine the priority while spending at least a little time doing an excellent job at a few key channels.

Whichever route you decide to go, make it clear which one is your priority. When it comes time to make sacrifices, which engine gets the goods? Are you going to double down on product or hire that ace growth hacker who will drive conversions at all costs?

The Growth Blend that I Recommend

When you first get traction for your business or are trying to accelerate an established business, make sure that you have some organic growth. This doesn’t need to be industry-shattering growth, but you should still see a bit of growth if you take your foot off the marketing pedal. This way you know that you have a solid product and that people want to talk about it.

Marketing gets so much easier when you build from a foundation of a great product. If your product sucks, it’s still possible to grow but your margin of error is razor thin. So start with a great product as your foundation.

Then build out marketing/sales teams to test channels, find the ones that’ll scale, and optimize your acquisition strategy.

This breaks down to roughly an 80/20 blend. Product is the priority, with marketing accelerating growth at a few key leverage points.

This is the most consistent path to growth that I’ve seen. Start by building a great product, then build a focused machine to funnel customers into it as fast as you can. It’s also more fun since you’re growing a product people love and helping people solve problems, without resorting to spammy strategies.

This isn’t the only path. Pick the one that fits the vision of your company the best.
