
Lars Lofgren

Building Growth Teams


Why I’m Switching Web Hosts

July 5, 2019 By Lars Lofgren

Recently, I’ve gotten fed up with my web host.

Since I stood this site up, I’ve used MediaTemple. This was before WP Engine was around. At the time, MediaTemple had carved out a nice niche for themselves as the premium web host. Tim Ferriss also raved about them back in the day. If it was good enough for his massive website, I figured it would be good enough for me.

Everything worked great for a few years.

Then GoDaddy bought MediaTemple. I’ve never been a fan of GoDaddy for a few reasons:

  • They’re super aggressive with worthless upsells within their app
  • I’ve always had the impression that they’re a sleazy company
  • They supported SOPA until there was a bunch of backlash

So I was now a customer of GoDaddy. Yippee.

I vowed to switch.

Then life got in the way, as it does.

More recently, I started to notice lots of issues on my site. The biggest problem has been lost emails. A certain percentage of emails never make it to my personal email. It’s become a major headache not knowing if someone actually responded or if their email just got lost. Maybe this is my fault somehow, maybe it’s my host. I don’t know. Regardless, I’m going to solve the problem by revamping my domain and host infrastructure since I wanted to switch anyway.

On top of some other problems that I’ve noticed, I finally decided to carve out the time and get off MediaTemple.

So where am I going next with my hosting?

I’m not sure yet.

I’ve been doing quite a bit of research; this post is a great breakdown of the best web hosts.

Most of my professional experience has been with sites on AWS or WP Engine.

Way back in the day, we moved the KISSmetrics blog to WP Engine from AWS. We were doing about 700,000 visitors per month then. The switch saved our engineering team a lot of ongoing maintenance and management. Even though WP Engine can be expensive, the time we saved our engineering team was worth every penny.

Our sites at KISSmetrics used this hosting structure:

  • Marketing site (kissmetrics.com) = AWS
  • Blog (blog.kissmetrics.com) = WP Engine
  • App (app.kissmetrics.com) = a bunch of custom stuff and a full DevOps team (that was way above my head)

Personally, I liked that structure a lot for a SaaS business. WP Engine made perfect sense for a high traffic blog. And I never found a reason to move our marketing site to WordPress. A SaaS business needs to customize their signup flow anyway so engineers will have to be involved with the site. It’s not like a blog that can be installed with WordPress and then never touched again by the engineering team. I also see an advantage to getting the main marketing pages away from most of the marketing team by not having them easily editable. Once you spend the time to get that stuff right, it doesn’t need regular edits anyway.

Then I spent several years at I Will Teach You To Be Rich.

Our main site was on WP Engine by the time I got there. We were also spinning up a few other blogs which ended up on AWS for some reason. That drove me nuts, having blogs on different hosts. Especially for a growing team, it doubles the complexity of every hosting process since folks have to learn multiple methods. Troubleshooting also gets a lot harder.

Thankfully, we ended up getting all our blogs on WP Engine to keep things consistent.

We did have an internal app for our courses that got moved from Rackspace to AWS.

These days, I’m doing a lot of work on Quick Sprout which currently does about 400,000 visits per month.

It’s also on WP Engine.

From all this, I learned a few lessons:

  • Blog hosting is definitely one of those items that should be outsourced; don’t try to manage it yourself. Your engineers’ time is too valuable to be spent on blog maintenance. I don’t really consider hosting a blog on AWS to be a real option.
  • WP Engine is a great choice for high-traffic blogs. I’d make the switch to WP Engine if you’re on track to hit 100,000 visits/month or above.
  • WP Engine isn’t perfect. They have a no-man’s land in their pricing structure in the 400,000 to 600,000 visits per month range, where it actually makes sense to get hit with a bunch of overages instead of upgrading. The jump between the 400,000-visit plan and the 1 million-visit plan is simply too big.
  • I have been through several hosting transitions. While they’re a pain, they’re not nearly as bad as switching out a CRM or marketing automation tool. If you’re worried about the scalability of your host, I’d switch right away and get it over with. I’ve never regretted cleaning up web hosts.
  • If you have a small personal, hobby, or side project site, it’s fine to use a host that’s cheap. But as soon as you start to build a company, get your host switched to something you can rely on for the next 5-10 years. The premium hosts earn their keep by not causing problems. It’s like having an amazing IT manager. If they’re really good at their job, it’ll seem like they never have to do anything. That’s not an accident; they built everything in a way that problems simply don’t occur in the first place. Find a host that never causes problems.

To follow my own advice, it’s about time I switched.

My 7 Rules for A/B Testing That Triple Conversion Rates

September 11, 2015 By Lars Lofgren

I really don’t care how any given A/B test turns out.

That’s right. Not one bit.

But wait, how do I double or triple conversion rates without caring how a test performs?

I actually care about the whole SYSTEM of testing. All the pieces need to fit together just right. If not, you’ll waste a ton of time A/B testing without getting anywhere. This is what happens to most teams.

But if you do it right. If you play by the right rules. And you get all the pieces to fit just right, it’s simply a matter of time before you triple conversions at any step of your funnel.

I set up my system so that the more I play, the more I win. I stack enough wins on top of each other that conversion rates triple. And any given test can fail along the way. I don’t care.
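To put rough numbers on that stacking, here’s a quick sketch. The baseline rate and the individual lifts are made-up, and real wins only compound on the step of the funnel you actually tested:

```python
# Hypothetical example: how a handful of modest A/B test wins compound.
baseline_rate = 0.05                     # starting conversion rate (5%)
lifts = [0.10, 0.25, 0.15, 0.40, 0.20]   # relative lifts from five winning tests

rate = baseline_rate
for lift in lifts:
    rate *= 1 + lift                     # each win multiplies the previous rate

print(f"Final rate: {rate:.4f} ({rate / baseline_rate:.2f}x the baseline)")
# Five wins in the 10-40% range land around 2.7x the baseline here;
# a couple more wins in that range push it past 3x.
```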

What does my A/B testing strategy look like? It’s pretty simple.

  • Cycle through as many tests as possible to find a couple of 10-40% wins.
  • Stack those wins on top of each other in order to double and triple conversion rates.
  • Avoid launching any false winners that drag conversions back down.

For all this to work, you’ll need to follow 7 very specific rules. Each of them is critical. Skip one and the whole system breaks down. Follow them and you’ll drive your funnel relentlessly up and to the right.

Rule 1: Above all else, the control stands

I look at A/B tests very differently from most people.

Usually, when someone runs a test, they’ll consider each of their variants as equals. The control and the variant are both viable and their goal is to see which one is better.

I can’t stand that approach.

We’re not here for a definitive answer. We’re here to cycle through tests to find a couple of big winners that we can stack on top of each other.

If there’s a 2% difference between the variant and the control, I really don’t care which one is the TRUE winner. Yes, yes, yes, I’d care about a 2% win if I had enough data to hit statistical significance on those tests (more on this in a minute). But unless you’re Facebook or Amazon, you probably don’t have that kind of volume. I’ve worked on multiple sites with more than 1 million visitors/month and it’s exceedingly rare to have enough data hitting a single asset in order to detect those kinds of changes.

In order for this system to work, you have to approach the variant and control differently. Unless a variant PROVES itself as a clear winner, the control stands. In other words, the control is ALWAYS assumed to be the winner. The burden of proof is on the variant. No changes unless the variant wins.

This ensures that we’re only making positive changes to assets going forward.

Rule 2: Get 2000+ people through the test within 30 days

So you don’t have any traffic? Then don’t A/B test. It’s that simple. Do complete revamps on your assets and then eyeball it.

Remember, we need the A/B testing SYSTEM working together. And we’re playing the long game. Which means we need a decent volume of data so we can cycle through a bunch of different test ideas. If it takes you 6 months to run a single test, you’ll never be able to run enough tests to find the few winners.

In general, I look for 2000 or more people hitting the asset that I’m testing within 30 days. So if you want to A/B test your homepage, it better get 2000 unique visitors every month. I even prefer 10K-20K people but I’ll get started with as little as 2000/month. Anything less than that and it’s just not worth it.

Rule 3: Always wait at least a week

Inside of a week, data is just too volatile. I’ve had tests with 240% improvements at 99% certainty within 24 hours of launching the test. This is NOT a winner. It always comes crashing down. Best-case scenario, it’s really just a 30-40% win. Worst case, it flip-flops and is actually a 20% decline.

It also lets you get a full weekly cycle worth of data. Visitors don’t always behave the same on weekends as they do during the week. So a solid week’s worth of data gives you a much more consistent sample set.

Here’s an interesting result that I had on one of my tests. Right out of the gate, it looked like I had a 10% lift. After a week of running the test, it did a COMPLETE flip-flop on me and became a 10% loser (at 99% certainty too):

Flip-Flop A/B Test

One of my sneaking suspicions is that most of the 250% lift case studies floating around the interwebs are just tests that had extreme results in the first few days. And if they had run a bit longer, they would have come down to a modest gain. Some of them would even flip-flop into losers. But because people declare winners too soon, they run around on Twitter declaring victory.

Rule 4: Only launch variants at 99% statistical significance

Wait, 99%? What happened to 95%?

If you’ve done an A/B test, you’ve probably run across the recommendation that you should wait until you hit 95% significance. That way, you’ll only pick a false winner 1 out of every 20 tests. And none of us want to pick losers so we typically follow this advice.

You’ve run a bunch of A/B tests. You find a bunch of wins. You’re proud of those wins. You feel a giant, happy A/B testing bubble of pride.

Well, I’m going to pop your A/B testing bubble of pride.

Your results didn’t mean anything. You picked a lot more losers than just 1 in 20. Sorry.

Let’s back up a minute. Where does the 95% statistical significance rule come from?

Dig up any academic or scientific journal that has quantitative research and you’ll find 95% statistical significance everywhere. It’s the gold standard.

When marketers started running tests, it was a smart move to use this same standard to see if our data actually told us anything. But we forgot a key piece along the way.

See, you can’t just run a measure of statistical confidence on your test after it’s running. You need to determine your sample size first. We do this by deciding the minimal improvement that we want to detect. Something like 5% or 10%. Then we can figure out the statistical power needed and from there, determine our sample size. Confused yet? Yeah, you kind of need to know some statistics to do this stuff. I need to look it up in a textbook each time it comes up.
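If you’re curious what that pre-test math looks like, here’s a minimal sketch using the standard two-proportion sample size formula. The 5% baseline rate and 10% minimum detectable lift are made-up inputs, and it assumes the common defaults of 95% confidence and 80% power:

```python
from scipy.stats import norm

def required_sample_size(baseline_rate, min_relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a given relative lift with a
    two-sided two-proportion test (normal approximation)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for 95% confidence
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Made-up inputs: 5% baseline conversion rate, detect a 10% relative lift.
print(required_sample_size(0.05, 0.10))  # roughly 31,000 visitors per variant
```

That’s roughly 31,000 visitors per variant just to reliably catch a 10% relative lift on a 5% baseline, which is why small lifts are out of reach for most sites.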

So what happens if we skip all the fancy shmancy stats stuff and just run tests to 95% confidence without worrying about it? You come up with false positives WAY more frequently than just 1 out of 20 tests.

Here’s an example test I ran. In the first two days, we got a 58.7% increase in conversions at 97.7% confidence:

Chasing Statistical Significance with A/B Tests - 2 Day Results

That’s more than good enough for most marketers. Most people I know would have called it a winner, launched it, and moved on.

Now let’s fast-forward 1 week. That giant 58.7% win? Gone. We’re at a 17.4% win with only 92% confidence:


Chasing Statistical Significance with A/B Tests - 1 Week Results

And the results after 4 weeks? Down to an 11.7% win at 95.7% certainty. We’ve gone from a major win to a marginal win in a couple of weeks. It might stabilize here. It might not.

Chasing Statistical Significance with A/B Tests - 4 Week Results

We have tests popping in and out of significance as they collect data. This is why determining your required sample size is so important. You want to make sure that a test doesn’t trick you early on.
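You can see the peeking problem for yourself with a small simulation. This is just a sketch with hypothetical traffic numbers, using a simple one-sided z-test as a stand-in for whatever certainty metric your testing tool reports. It runs A/A tests (control and variant are identical) and counts how often daily peeking at a 95% bar declares a “winner” anyway:

```python
import numpy as np
from scipy.stats import norm

def false_winner_rate_with_peeking(n_tests=2000, visitors_per_day=200, days=30,
                                   true_rate=0.05, threshold=0.95, seed=0):
    """Simulate A/A tests where significance is checked every day and a 'winner'
    is declared the first time the variant crosses the threshold."""
    rng = np.random.default_rng(seed)
    false_wins = 0
    for _ in range(n_tests):
        c_conv = v_conv = c_n = v_n = 0
        for _ in range(days):
            c_n += visitors_per_day
            v_n += visitors_per_day
            c_conv += rng.binomial(visitors_per_day, true_rate)  # control conversions
            v_conv += rng.binomial(visitors_per_day, true_rate)  # same true rate!
            p1, p2 = c_conv / c_n, v_conv / v_n
            p_pool = (c_conv + v_conv) / (c_n + v_n)
            se = np.sqrt(p_pool * (1 - p_pool) * (1 / c_n + 1 / v_n))
            if se > 0 and norm.cdf((p2 - p1) / se) >= threshold:
                false_wins += 1  # declared a winner even though nothing changed
                break
    return false_wins / n_tests

print(false_winner_rate_with_peeking())
```

Because every daily peek is another chance to get fooled by noise, the false winner rate climbs well past the 1-in-20 that a single, properly sized look would give you.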

But Lars! It still looks like a winner even if it’s a small winner! Shouldn’t we still launch it? There are two problems with launching early:

  1. There’s no guarantee that it would have turned out a winner in the long run. If we had kept running the test, it might have dropped even further. And every once in a while, it’ll flip-flop on you to become a loser. Then we’ve lost hard-earned wins from previous winners.
  2. We would have vastly over-inflated the expected impact on the business. A 60% win moves mountains. They crush your metrics and eat board decks for breakfast. 11% wins, on the other hand, have a much gentler impact on your growth. They give your metrics a soothing spa package and nudge them a bit in the right direction. Calling that early win at 60% gets the whole team way too excited. Those same hopes and dreams get crushed in the coming weeks when growth is far more modest. Do that too many times and people stop trusting A/B test results. They’ll also take the wrong lessons from it and start focusing on elements that don’t have a real impact on the business.

So what do we do if 95% statistical significance is unreliable?

There’s an easier way to do all this.

While I was at Kissmetrics, I worked with Will Kurt, our Growth Engineer at the time. He’s a wicked smart guy who runs his own statistics blog now.

We modeled out a bunch of A/B testing strategies over the long term. There’s a blog post that goes over all our data and I also did a webinar on it. How does a super disciplined academic research strategy compare to the fast and loose 95% online marketing strategy? What if we bump it to 99% statistical significance instead?

We discovered that you’d get very similar results over the long term if you just used a 99% statistical significance rule. It’s just as reliable as the academic research strategy without needing to do the heavy stats work for each test. And using 95% statistical significance without a required sample size isn’t as reliable as most people think it is.

The 99% rule is the cornerstone of my A/B testing strategy. I only make changes at 99% statistical significance. Any less than that and I don’t change the control. This reduces the odds of launching false winners to a more manageable level and allows us to stack wins on top of each other without accidentally negating our wins with a bad variant.
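If you want to sanity-check your own numbers against the 99% bar, here’s a minimal sketch of the underlying math: a standard one-sided two-proportion z-test with hypothetical visitor counts. Your testing tool may compute certainty differently, so treat this as an approximation:

```python
from scipy.stats import norm

def variant_beats_control(control_conv, control_n, variant_conv, variant_n,
                          required_confidence=0.99):
    """Return the observed lift, one-sided confidence that the variant is better,
    and whether it clears the required bar."""
    p1, p2 = control_conv / control_n, variant_conv / variant_n
    p_pool = (control_conv + variant_conv) / (control_n + variant_n)
    se = (p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n)) ** 0.5
    confidence = norm.cdf((p2 - p1) / se)
    lift = (p2 - p1) / p1
    return lift, confidence, confidence >= required_confidence

# Hypothetical counts: 1,000 visitors per variant.
print(variant_beats_control(control_conv=50, control_n=1000,
                            variant_conv=75, variant_n=1000))
```

With these made-up counts, even a 50% observed lift at 1,000 visitors per side lands just under 99% confidence, which is exactly the kind of result this rule tells you to keep running instead of launching.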

Rule 5: If a test drops below a 10% lift, kill it.

Great, we’re now waiting for 99% certainty on all our tests.

Doesn’t that dramatically increase the time it takes to run all our tests? Indeed it does.

Which is why this is my first kill rule.

Again, we care about the whole system here. We’re cycling to find the winners. So we can’t just let a 2-5% test run for 6 months.

What would you rather have?

  • A confirmed 5% winner that took 6 months to reach
  • A 20% winner after cycling through 6-12 tests in that same 6 month period

To hell with that 5% win, give me the 20%!

So the longer we let a test run, the higher our opportunity costs stack up. If we wait too long, we’re forgoing serious wins that we could have found by launching other tests.

If a test drops below a 10% lift, it’s now too small to matter. Kill it. Shut it down and move on to your next test.

What if we have an 8% projected win at 96% certainty? It’s SO close! Or what if we have enough data to find 5% wins quickly?

Then we ask ourselves one very simple question: will this test hit certainty within 30 days? If you’re 2 weeks into the test and close to 99% certainty, let it run a bit longer. I do this myself.

What happens at day 30? That leads us to our next kill rule.

Rule 6: If no winner after 1 month, kill it.

Chasing A/B test wins can be addictive. JUST. ONE. MORE. DAY. OF. DATA.

We’re emotionally invested in our idea. We love the new page that we just launched. And IT’S SO CLOSE TO WINNING. Just let it run a bit longer? PLEEEEEASE?

I get it, each of these tests becomes a personal pet project. And it’s heartbreaking to give up on it.

If you have a test that’s trending towards a win, let it keep going for the moment. But we have to cut ourselves off at some point. The problem is that many of these “small-win” tests are mirages. First they look like 15% wins. Then 10%. Then 5%. Then 2%. The more data you collect, the more the variant converges with your control.

CUT YOURSELF OFF. We need a rule that keeps our emotions in check. You gotta do it. Kill that flop of a test and move on to your next idea.

That’s why I have a 30-day kill rule. If the variant doesn’t hit 99% certainty by day 30, we kill it. Even if it’s at 98%, we shut it down on the spot and move on.

Rule 7: Build your next test while waiting for your data

Cycling through tests as fast as we can is the name of the game. We need to keep our testing pipeline STACKED.

There should be absolutely NO downtime between tests. How long does it take you to build a new variant? Starting with the initial idea, how long until it goes live? 2 weeks? 3 weeks? Maybe even an entire month?

If you wait to start on the next test until the current test is finished, you’ve wasted enough data for 1-2 other tests. That’s 1-2 other chances that you could have found that 20% win to stack on top of your other wins.

Do not waste data. Keep those tests running at full speed.

As soon as one test comes down, the next test goes up. Every time.

Yes, you’ll need to get a team in place to dedicate to A/B tests. This is not a trivial amount of work. You’ll be launching A/B tests full time. And your team will need to be moving at full-speed without any barriers.

If it were easy, everyone would be doing it.

Follow All 7 A/B Testing Rules to Consistently Drive Conversion Up and to the Right

Follow the system with discipline and it’s a matter of time before you double or triple your conversion rates. The longer you play, the more likely you are to win.

Here are all the rules in one spot:

  1. Above all else, the control stands
  2. Get 2000+ people through the test within 30 days
  3. Always wait at least a week
  4. Only launch variants at 99% certainty
  5. If a test drops below a 10% lift, kill it.
  6. If no winner after 1 month, kill it.
  7. Build your next test while waiting for your data

How Live Chat Tools Impact Conversions and Why I Launched a Bad Variant

July 21, 2015 By Lars Lofgren

Do those live chat tools actually help your business? Will they get you more customers by allowing your visitors to chat directly with your team?

Like most tests, you can come up with theories that sound great for both sides.

Pro Live Chat Theory: Having a live chat tool helps people answer questions faster, see the value of your product, and will lead to more signups when people see how willing you are to help them.

Anti Live Chat Theory: It’s one more element on your site that will distract people from your primary CTAs so conversions will drop when you add it to your site.

These aren’t the only theories either, we could come up with dozens on both sides.

But which is it? Do signups go up or down when you put a live chat tool on the marketing site of your SaaS app?

It just so happens I ran this exact test while I was at Kissmetrics.

How We Set Up the Live Chat Tool Test

Before we ran the test, we already had Olark running on our pricing page. The Sales team requested it and we launched without running it through an A/B test. Anecdotally, it seemed helpful. An occasional high-quality lead would come through and it would help our SDR team disqualify poor leads faster.

Around September 2014, the Sales team started pushing to have Olark across our entire marketing site. Since I had taken ownership of signups, our marketing site, and our A/B tests, I pushed back. We weren’t just going to launch it, it needed to go through an A/B test first. I was pro-Olark at this point but wanted to make sure we weren’t cannibalizing our funnel by accident.

We got it slotted for an A/B test in Oct 2014 and decided to test it on 3 core pages of our marketing site: our Features, Customers, and Pricing pages.

Our control didn’t have Olark running at all. This means that we stripped it from our pricing page for the control. Only the variant would have Olark on any pages.

Here’s what our Olark popup looked like during business hours:

Kissmetrics Olark Popup Business Hours

And here it is after-hours:

Kissmetrics Olark Popup After Hours

Looking at the popups now, I wish I had done a once-over on the copy. It’s pretty bland and generic. Better copy might have gotten us better results. At the time, I decided to test whatever Sales wanted since this test was coming from them.

Setting up the A/B test was pretty simple. We used an internal tool to split visitors into variants randomly (this is how we ran most of our A/B tests at Kissmetrics). Half our visitors randomly got Olark, the other half never saw it. Then we tagged each group with Kissmetrics properties and used our own Kissmetrics A/B Test Report to see how conversions changed in our funnel.
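The splitting mechanism itself is simple. Here’s a minimal sketch of one common approach, hash-based bucketing, so a returning visitor always lands in the same variant. The test name and visitor ID are made up, and this is not the actual Kissmetrics internal tool:

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str,
                   variants=("control", "olark")) -> str:
    """Deterministically bucket a visitor: the same ID always gets the same
    variant, so returning visitors never flip between experiences mid-test."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: tag the visitor with this property, then compare funnels by group.
print(assign_variant("visitor-8675309", "live-chat-test-oct-2014"))
```

Hashing on the test name as well as the visitor ID means the same person can land in different buckets across different tests, so one experiment doesn’t contaminate the next.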

So how did the data play out anyway?

Not great.

Our Live Chat A/B Test Results

Here’s what Olark did to our signups:

Live Chat Tool Impact on Signup Conversions

A decrease of 8.59% at 81.38% statistical significance. I can’t say that we have a confirmed loser at this point. I prefer 99% statistical significance for those kinds of claims. But that data is not trending towards a winner.

How about activations? Did it improve signup quality and get more people to install Kissmetrics? That step of the funnel looked even worse:

Live Chat Tool Impact on Activations

A 22.14% decrease on activations at 97.32% statistical significance. Most marketers would declare this as a confirmed loser since we hit the 95% statistical significance threshold. Even if you push for 99% statistical significance, the results are not looking good at this point.

What about customers? Maybe it increased the total number of new customers somehow? I can’t share that data but the test was inconclusive that far down the funnel.

The Decision – Derailed by Internal Politics

So here’s what we know:

  • Olark might decrease signups by a small amount.
  • Olark is probably decreasing Kissmetrics installs.
  • The impact on customer counts is unknown.

Seems like a pretty straightforward decision, right? We’re looking at possible hits on signups and activations, then a complete roll of the dice on customers. These aren’t the kind of odds I like to play with. Downside at the top of the funnel with a slim chance of success at the bottom. We should have taken it down, right?

Unfortunately, that’s not what happened.

Olark is still live on the Kissmetrics site 9 months after we did the test. If you go to the pricing page, it’s still there:

Kissmetrics Live Chat Tool on Pricing Page

Why wouldn’t we kill a bad test? Why would we let a bad, risky variant live on?

Internal politics.

Here’s the thing: just because you have data doesn’t mean that decisions get made rationally.

I took these test results to one of our Sales directors at the time and said that I was going to take Olark off the site completely. That caused a bit of a firestorm. Alarms got passed up the Sales chain and I found myself in a meeting with the entire Sales leadership.

I wanted Olark gone. Sales was 100% against me.

Live chat is considered a best practice (or at least it was a best practice at one point). It’s a safe choice for any SaaS leadership team. I have no idea HOW it became a best practice considering the data I found but that’s not the point. There’s plenty of best practices that sound great but actually make things worse.

Here’s what the head of Sales told me: “Salesforce uses live chat so it should work for us too.”

But following tactics from industry leaders is the fastest path to mediocrity for a few reasons:

  • They might be testing it themselves to see if it works; you don’t know if it’s still mid-test or a win they’ve decided to keep.
  • They might not have tested it; they could be following best practices themselves and have no idea if it actually helps.
  • They may have gotten bad data but decided to keep it because of internal politics.
  • Even if it does work for them, there’s no guarantee that it’ll work for you. I’ve actually found most tactics to be very situational. There’s a few cases where a tactic helps immensely but most of the time it’s a waste of effort and has no impact.

It’s also difficult to understand how a live chat tool would decrease conversions. Maybe it’s a distraction, maybe not. But when you see good opportunities come in as an SDR rep that help you meet your qualified lead quotas, it’s not easy to separate that anecdotal experience from the data on the entire system.

But none of this mattered. Sales was completely adamant about keeping it.

The ambiguity on customer counts didn’t help either. As long as it was an unknown, arguments could still be made in favor of Olark.

Why didn’t I let the test run longer and get enough data on how it impacted new customer counts? With how close the data was, we would have needed to run the test for several months before getting anywhere close to an answer. Since I had several other tests in my pipeline, I faced serious opportunity costs if I let the test run. Running one test for 3 months means not running 3-4 other tests that have a chance at being major wins.

So I faced a choice. I could have removed Olark if I was stubborn enough. My team had access to the marketing site; Sales didn’t. But standing my ground would start an internal battle between Marketing and Sales. It’d get escalated to our CEO and I’d spend the next couple of weeks arguing in meetings instead of trying to find other wins for the company. Regardless of the final decision, the whole ordeal would fray relationships between the teams. I’d also burn a lot of social capital if I decided to push my decision through. With the decrease in trust, there would be all sorts of long-term costs that would prevent us from executing effectively on future projects.

I pushed back and luckily got agreement not to launch it on the Features or Customers pages. But Sales wouldn’t budge on the Pricing page. I chose to let it drop and it lives to this day.

That’s how I launched a variant that decreased conversions.

Should You Use a Live Chat Tool on Your Site?

Could a live chat tool increase the conversions on your site? Possibly. Just because it didn’t work for me doesn’t mean it won’t work for you.

Are there other places that I would place a live chat tool? Maybe a support site or within a product? Certainly. There are plenty of cases where acquisition matters less than helping people as quickly as possible.

Would I use a live chat tool at an early stage startup to collect every possible bit of feedback I could? Regardless of what it did to signups? Most definitely. Any qualitative feedback at this stage is immensely valuable as you iterate to product/market fit. Sacrificing a few signups is well worth it to be able to chat with prospects.

If I was trying to increase conversions to signups, activations, and customers, would I launch a live chat tool on a SaaS marketing site without A/B testing it first? Absolutely not. Since this test didn’t go well, I wouldn’t launch a live chat tool without conclusive data proving that it helped conversions.

Olark and the rest of the live chat companies have great products. There’s definitely ways for them to add a ton of value. Getting lots of qualitative feedback at an early stage startup is probably the strongest use case that I see. But if your goal is to increase signups, activations, and customers, I’d be very careful with assuming that a live chat tool will help you.

The 9 Delusions From the Halo Effect

May 31, 2015 By Lars Lofgren

You’re being lied to.

Well, not intentionally.

We’re constantly being pinged with stories of companies that have rocketed to success. Especially in tech, there’s always another $1 billion unicorn around the corner. Uber, Facebook, Airbnb, Slack, Zenefits, Box, Shopify, yadda yadda yadda.

At the same time, rock-solid companies seem to lose their way and crater.

We’re all desperate to know why.

Makes sense. We want to replicate the crazy success and avoid failure.

This is where we all get sucked into the nonsense narratives. They’ll give you false hope on how to produce success.

Here’s a good example: should a company expand into different products, industries, or markets? We’ll answer this question in a minute.

But first, who loves LEGO? I DO. My favorite childhood toy by far. You know how people will buy huge mansions and a dozen sports cars if they ever hit it big? I’ll just buy every LEGO set and fill an entire room with them. These days, they even have a Batwing with Joker steamroller set. How cool is THAT?

LEGO Batwing and Joker Steamroller set

As it turns out, LEGO is a great case study for how delusional we can be about what produces successful companies.

Go check out LEGO’s 2014 annual report. In 2014, their net profit increased by about 15% to over 7 billion Danish krone (DKK). At current exchange rates, that’s about USD$1 billion. Back in 2011, they pulled DKK 4 billion in net profit. So they’ve had similar growth rates since 2011 and have nearly doubled their net profit. Not shabby at all.

Why is Lego doing so well? Management gives the credit to expansions beyond its core business: they crushed it with the LEGO Movie and the new line of LEGO sets that released with it. Right on page 5 of the annual report: “new products make up approximately 60% of the total sales each year.” They’ve also seen a lot of growth from the toy market in Asia.

So we’ve answered our question right? If we want to keep growing, we’ll want to expand beyond our core product and market base at a certain point, right?

Well, wait a minute. Our story isn’t that simple.

Go back to 2004 when LEGO nearly went bankrupt. Their COO, Poul Plougmann, got sacked and the business press lambasted the company for poor results. They caught a ton of flak for releasing a LEGO Harry Potter line (apparently, sales slowed when there was a gap between some of the Harry Potter movie releases), experimenting with new toy products, jumping into video games, launching a failed TV show, and trying to go beyond its core brand. The consensus was that they should get back to their core base and stop messing around by trying to innovate into new products.

Wait, which is it? In 2014, product expansion from the LEGO Movie helps push the company to new heights. In 2004, the LEGO Harry Potter line, TV shows, and the first attempt at video games nearly pushes it to bankruptcy. During each period, we push narratives and recommendations that contradict each other. Go back to your core base! Wait, never mind! Expand into new products!

I can’t take credit for this insight or finding the LEGO story. It’s one of the case studies used in The Halo Effect by Phil Rosenzweig.

Rosenzweig shows how narratives are twisted to explain results after they occur. He wrote the original version of his book back in 2007 (there’s a new 2014 copy that you should grab if you haven’t yet). Then after the book is published, LEGO turns around and we start attributing their success to LEGO’s constrained innovation:

  • An interview from Wharton with David Robertson
  • A brief article in the Harvard Business Review (also by David Robertson)
  • An article from the Daily Mail
  • Business Insider’s account of the come-back
  • Another from Forbes

LEGO went back to its base. Innovation trashed the company in 2004 because it was highly unprofitable and expanded beyond its core strengths. Now LEGO has entered another golden era by constraining innovation.

But LEGO just had another huge year by expanding into its first movie. Hard to get further from its product base than that. A decade ago, the LEGO TV show got part of the credit when LEGO struggled. Now the LEGO Movie gets the credit when profits have turned around.

Again, which is it? Innovation? Constrained innovation? Innovation as long as you do these 7 simple steps? Maybe all of the above? Reducing a business to a simple narrative for a blog post or interview is incredibly difficult. And you’ll want to be careful of any source that attempts to do so.

To be fair, David Robertson and Bill Breen wrote a book that dives into the Lego story. I’m hoping they capture the nuance of what went into LEGO’s turn-around. I haven’t read the book myself but it’s on my to-read list.

We’re all exceptionally good at rationalizing any argument. If things go well, we’ll cherry pick some attributes and credit them for the company’s success. Then when things go sideways, we take the same attributes to explain the failure. It all sounds nice and tidy. Too bad it’s a poor reflection of reality.

Phil Rosenzweig calls this habit of ours the Halo Effect. When things go well, we attribute success to whatever attributes stand out at the company. When things go poorly, we attribute bad results to those exact same attributes. It’s one of the 9 delusions that he covers in his book. Let’s go through each of them.

The Halo Effect

The tendency to look at a company’s overall performance and make attributions about its culture, leadership, values, and more. In fact, many things we commonly claim drive company performance are simply attributions based on prior performance.

This is what happened to Lego. In 2004, it was skewered by the press for trying to expand beyond its core business. Now it can’t get enough praise as it drives growth into new markets and product lines.

This happens to companies, teams, and you. When things go well, the quirks get credit for success. When things go poorly, those same quirks get the blame. Our stories search for what’s convenient, not what’s true.

Remember this when you’re in your next team meeting. Someone will float a story for how you got to this point. If it sounds good, the story will spread and your whole organization will start shifting in response to it. And a nonsense story means nonsense changes. There are two things you can do to limit these nonsense stories:

  • Chase causality as often as you can (more on this in a moment). The better your team understands how your systems really work, the closer your stories will be to the truth.
  • Realize that your stories are typically nonsense. It’s your goal to test the validity of that story as fast as you can.

The Delusion of Correlation and Causality

Two things may be correlated, but we may not know which one causes which. Does employee satisfaction lead to high performance? The evidence suggests it’s mainly the other way around — company success has a stronger impact on employee satisfaction.

We’ve all heard the adage “correlation, not causation.” But when you’re about to come up short on a monthly goal, how easy is it to remember correlation versus causation? It’s not. We all break and reach for the closest story we can. Even if we avoid throwing blame around, we still grasp for any story that will guide our way through the madness.

Proving causality is one of the most difficult bars to reach. Very few variables truly impact our goals in a meaningful way. How do we deal with this?

If you only rely on after-the-fact data, you never move beyond correlation. Every insight and every bump in a metric is, at best, a correlation. The only way to establish any degree of causality (and we’re never 100% sure) is to run a controlled experiment of some kind. You’ve got to split your market into two groups and see what happens when you isolate variables.

This is why I push so hard for A/B tests and get really strict with data quality. They allow us to break past the constraints of correlation and gain a glimpse of causation.

If you limit your learning to just correlation, you’ll get crushed by those chasing causality. They’ll have a much deeper understanding of your environment than you do. You won’t be able to keep up.

And remember, the business myths, stories, best practices, and press rarely look at correlation versus causation. It’s all just correlation.

The Delusion of Single Explanations

Many studies show that a particular factor — strong company culture or customer focus or great leadership — leads to improved performance. But since many of these factors are highly correlated, the effect of each one is usually less than suggested.

Data is messy, markets are messy, customers are messy. The complexities of these systems vastly exceed our ability to understand or adequately measure them. Variables interact and compound in limitless ways.

Whenever someone gives you a nice, tidy explanation for why a business succeeded or failed, assume it’s nonsense.

You can’t depend on a single variable to drive your business forward. World-class teams have mastered countless business functions, everything from employee benefits to market research. The hottest New York Times bestseller may give you a 5 step process on how to conquer the world with nothing other than whatever flavor-of-the-month strategy everyone loves at the moment. But that’s a single variable among many.

Remember that your business moves within an endlessly complex system. Not only are you trying to change this system, you’ll be pushed around by it.

The Delusion of Connecting the Winning Dots

If we pick a number of successful companies and search for what they have in common, we’ll never isolate the reasons for their success, because we have no way of comparing them with less successful companies.

Good ol’ survivorship bias. We can’t just look at winners. We need to find a batch of losers and look for the differences between the two groups. Otherwise, we’re just pulling out commonalities that don’t mean anything.

The tech “unicorn” fad has succumbed to this delusion. Everyone’s looking for patterns among the recent $1 billion tech startups, trying to find the patterns so they can build their own unicorn. But they’re doing many things in exactly the same way as all the startups that blow up or stall out. We just don’t hear about those failures. And if we do, those stories aren’t deconstructed in the same level of detail as the unicorns. So we get a picture of what amazing companies look like but a very limited view on how they differ from their failed counterparts.

Study the failures just as deeply as the successes.

The Delusion of Rigorous Research

If the data aren’t of good quality, it doesn’t matter how much we have gathered or how sophisticated our research methods appear to be.

Rosenzweig takes a shot at Jim Collins with this one. Jim Collins has written several renowned books like Good to Great, Built to Last, and Great by Choice. Collins and his team do a ton of historical research to figure out which attributes separate great companies from average companies. As Rosenzweig points out, most of this research is based on flawed business journalism that suffers from the Halo Effect. So the raw data for Collins’ research is horribly flawed, which means his books aren’t as solid as many people think.

Regardless of how you feel about Collins’ books, this is still a critical delusion to remember. It doesn’t really matter how sophisticated you are with modeling, data science, research, or analytics if your data sucks. Fix your data first before trying anything fancy.

This is where I start with every business I work with. Before jumping into growth experiments, A/B testing, or building out channels, I always make sure I can trust my data. Data’s never 100% perfect but there needs to be a low margin of error. The quality of your insights depends on the quality of your data.

The Delusion of Lasting Success

Almost all high-performing companies regress over time. The promise of a blueprint for lasting success is attractive but not realistic.

You will regress to the mean. Crazy success is an outlier by default. Sooner or later, results come back down to typical averages.

Mutual funds prove this point perfectly. In any 2 year period, you can find mutual funds that crush the S&P 500. Wait another 5-10 years and those same mutual funds have fallen back to earth. Your company is in the same boat. If things go crazy well, it’s a matter of time before you come back down. Take advantage of your outlier while it lasts.

This is particularly dangerous with individual or team performance. Is it really talent or are you just an outlier? Sooner or later, you’ll have some campaign or project that takes off. Well… if you launch enough stuff, you’re bound to get lucky. The real question is how long can you sustain it? Can you repeat that success? And since we all regress to the mean eventually, how can you use your current success to get through the eventual decline?

All channels decline, all products decline, all markets decline, all businesses decline. You will decline. What are you doing now to plan for it?

The Delusion of Absolute Performance

Company performance is relative, not absolute. A company can improve and fall further behind its rivals at the same time.

You’re graded on a curve whether you like it or not. Even if you’re improving, customers won’t care if your competitor is improving faster than you are. You’ll need to stay ahead of the pack no matter how fast the pack is already moving.

Otherwise, it’s a matter of time before you’ve lost the market. Your success isn’t determined in isolation. Just because you did a great job doesn’t mean you’ll achieve greatness.

This stems from a basic psychological principle: as humans, we do a terrible job at perceiving absolute value. This applies to pricing, customer service, product value, and every trait around us. In order to gauge how good or bad something is, we always look for something to compare it to. It really doesn’t matter if you cut prices by 50% if your competitor found a way to cut them by 60%. You’re still considered too expensive.

Your work will always be judged in relation to the work of your peers.

The Delusion of the Wrong End of the Stick

It may be true that successful companies often pursued a highly focused strategy, but that doesn’t mean highly focused strategies often lead to success.

Another shot at Good to Great with this one.

One of the core concepts in Good to Great is hedgehog versus fox companies. Hedgehog companies focus relentlessly on one thing. Foxes dart from idea to idea. According to Collins, amazing companies are all hedgehogs with ruthless focus.

But we don’t have the full picture of the risk/reward trade-off. It’s a lot like gambling or investing. You COULD throw your entire life savings into a single stock (hedgehog) and if that stock takes off… you’ll make a fortune. But if it doesn’t? You’ve lost everything. Investors that diversify (foxes) won’t reap extreme gains but they also won’t expose themselves to extreme losses.

Companies might work very similarly. Yes, hugely successful companies could tend to be hedgehogs. They made big bets and won. But that might not be the best strategy for your company if it means taking on substantial amounts of risk. Most importantly, we can’t say for sure what the risk/reward trade-offs look like without a larger data set of companies. Even if great companies out-perform average companies when they’re hedgehogs, there could be just as many hedgehog companies that weren’t so lucky.

The Delusion of Organizational Physics

Company performance doesn’t obey immutable laws of nature and can’t be predicted with the accuracy of science — despite our desire for certainty and order.

Physics is beautiful and elegant. Business is not.

No matter what you do, you cannot remove uncertainty in business like you can with physics. Books, consultants, blog posts, and pithy tweets will all try to convince you that a simple step-by-step process will take your business to glory. As much as we’d all like to have simple rules to follow, that’s not how this game is played. Business cannot be reduced to fundamental laws or rules.

And sometimes, the outcome is completely outside your control. Even if you do everything right, follow all the right strategies, use the best frameworks, hire the best people, and build something amazing, the whole business can still go sideways on you. We can’t remove uncertainty from the system. All we can do is stack the odds in our favor. Fundamentally, business and careers are endless games of probability.

Recap Time! The 9 Delusions From the Halo Effect

Here are all 9 delusions in a nice list for you:

  • The Halo Effect
  • The Delusion of Correlation and Causality
  • The Delusion of Single Explanations
  • The Delusion of Connecting the Winning Dots
  • The Delusion of Rigorous Research
  • The Delusion of Lasting Success
  • The Delusion of Absolute Performance
  • The Delusion of the Wrong End of the Stick
  • The Delusion of Organizational Physics

Don’t get sucked into the delusional narratives of success. Embrace the uncertainty.

How to Read 70 Books a Year And Catapult Your Career

April 9, 2015 By Lars Lofgren

I’ve been known to read a few books.

For the last few years, I’ve actually been keeping track of how many books I read. Here’s my annual totals:

  • 2011 = 39
  • 2012 = 68
  • 2013 = 50
  • 2014 = 70

With the 9 books that I’ve read so far in 2015, that brings my total to 236. This doesn’t even include all the books I read, just the ones that relate to business. I even finished all 100 books on Josh Kaufman’s Personal MBA reading list along the way.

Before we get into how I read this much (and how you can too)… why even bother? Reading takes a ton of time, especially if you want to read 70 books a year.

I’ll be straight with you: I would not be where I am today if I didn’t read as much as I do.

I originally started as a contractor at KISSmetrics, working on blog posts and support help videos. Two years later, I was the Head of Marketing and reported directly to the CEO.

As soon as I figured out how to perform at a high level with my current level of responsibilities, I asked to take on bigger challenges. Basically, I kept asking to get thrown into the deep-end. Then when I learned to tread water, I found the next storm to get thrown into.

  • Once I got comfortable with blog posts and product videos, I jumped into webinars. We had only done one webinar before I started. My webinar system went on to become our second-largest source of leads.
  • Our email system was in shambles. So I rebuilt the entire thing on a marketing automation tool and integrated it into the workflows of our Sales team. This set the whole foundation for being able to scale our lead growth.
  • Then I took responsibility for our lead counts and conversion rates. I put together our A/B testing strategy and our lead gen campaigns. A year later, we had tripled our conversion rates and quadrupled our monthly lead count.
  • To top it off, I jumped into the Head of Marketing role. I started attending board and leadership meetings, set the Marketing strategy and budget for 2015, and doubled the size of the team. We kept hitting all of our monthly goals like clockwork.

Every 6-9 months, I was taking a huge jump in responsibilities.

The only reason I survived that kind of pace is that I had done an immense amount of reading. Need to crank out webinars? I just finished 4 books on webinars and how to give great presentations, let’s do this! Now driving monthly lead counts and improving conversions? Sure thing, I’ve read every conversion optimization book on Amazon. Time to double the size of the Marketing team? Good thing I just finished 5 of the top books on hiring. When my responsibilities increased, I had already spent time studying from the best in the field.

Reading won’t turn you into a world-class expert. You’ll need years of in-the-trenches experience to be truly world-class. But reading at this volume will make sure you can hit the ground running. You’ll become a solid practitioner in weeks instead of years. That’s how you survive being thrown into the deep-end every couple of months.

I’ve got 8 principles that I use to keep my reading volume up. If you follow them, you’ll be able to take on huge jumps in responsibility and accelerate your career.

Anchor 30-60 Minutes of Daily Reading

This is the key to making all this work.

By carving out just 30-60 minutes every day, you’ll be surprised how many books you’ll start to go through.

If you’ve done any research on how to change habits (The Power of Habit and Self-Directed Behavior will get you up to speed), you know that building a new habit works best when you anchor it against some other trigger. When you hit that trigger, you’ll be prompted to do your new habit. Before long, it’ll be an automatic response that you don’t even think about.

My favorite anchor for this? Reading in bed as I fall asleep. If I’ve had a crazy day and I’m completely exhausted, I might only get through a page or two before passing out. But if I have plenty of energy left over and have a great book in my hands, I’ll easily crush a third of a book in a couple of hours before finally falling asleep. Most likely, 30 minutes and a chapter or two is more than enough to put me to sleep.

One important caveat here: don’t regularly sacrifice sleep for reading. Some people have a really hard time falling asleep while reading non-fiction. Too many ideas start jumping into your head and you get amped thinking about all the new possibilities. If this happens to you regularly, try some fiction before bed and find another time during the day to anchor your business reading.

Find your ideal trigger for 30-60 minutes of daily reading. If one doesn’t work after a week or so, try another. Here’s a few other options:

  • Right after breakfast before the rest of your day starts.
  • During your lunch break.
  • On the train or bus during your commute.
  • An audiobook while you drive to work.
  • First thing you do when you get home from work.
  • After brushing your teeth and before going to bed.
  • In bed as you fall asleep.

Buy Your Next Book Before You Finish Your Current Book

As soon as you get close to finishing your current book, make sure you’ve got another one sitting around for you to dive into next. There shouldn’t be any breaks in your new reading habit. Finishing a book and then binging on Netflix for a few weeks will really slow you down.

When you start to get to the end of your current book, grab one on Amazon or run by your local book store. It’s also a nice little incentive to keep you going and wrap up the current book. Finishing sooner also means starting the next one sooner.

Since I’m a compulsive reader, I’ve moved most of my reading to a Kindle which makes this a non-issue. As soon as I’m finished with my current book, I download the next book that I want instantly. Takes me 2 minutes tops.

What about buying a bunch at a time so you have a to-read pile?

If that works for you, go for it. Several years ago, I developed a habit of buying $200-300 worth of books at a time. Then I’d only read the first couple before I bought another batch, leaving me with a pile that I never seemed to get around to. So now I force myself to finish my current book before buying the next one.

Take Your Book With You Everywhere

That’s right, be that nerd that never goes anywhere without a book. Now when you have a few spare minutes waiting for your next meeting, the bus, or your friends to show up, you’ve got a book at your side. Just like how that daily 30-60 minutes adds up over a year, 10 minutes a few times throughout the day also adds up real fast.

I’ve always got my Kindle on me. Whenever I have some spare time, I grab it and start reading. It’s much more fulfilling than scrolling through my Facebook feed. Would you rather level up one of your business skills or see which of your friends just shared a BuzzFeed article? I don’t even have Facebook installed on my phone; I’m too busy reading.

Flights and commutes are also perfect for this. Don’t crank on that spreadsheet or deck, just relax and get some reading done. With nothing else to distract you, you’ll crush books. Everyone else frets about their flight getting delayed, I just use the extra time to finish a few more chapters. If my other work is important enough, I’ll find time to get it done. But by allocating travel time to reading, I’m consistently investing in my long-term instead of just running on the short-term project treadmill.

Keep a To-Read List

I have hundreds of books in my backlog, more than I could ever read. You should start your own to-read list which includes every book you’ve ever wanted to read. This way, you’ll never have to wonder what you should read next. As soon as you start to finish your current book, you’ve got a huge list of books you’ve already decided to read.

Some people will add books to their Amazon wishlist, I just use Evernote. Here’s a snippet of what mine looks like:

Lars Lofgren To-Read List

That goes on for hundreds, maybe thousands of books. When I’m looking for my next book, I scroll down and start popping titles into Amazon to jog my memory on what each book’s about. When I find the right one, I hit the purchase button.

But where do you get ideas for books in the first place?

There’s a number of people that I respect immensely in my field. If any of them recommends a book, I add it straight to my list. Even if it’s a topic I’m lukewarm on, any recommendation from them goes to the top.

I’ll also check out the reviews on Amazon whenever I see a book mentioned elsewhere. It might come up in some random blog post, a retweet scrolling through my feed, or a New York Times article. And if it looks solid, then it gets added.

Most importantly, pay close attention to books that get mentioned repeatedly in your field. For marketing, you’ll see Influence, Permission Marketing, and The 22 Immutable Laws of Marketing all the time. And deservedly so, they’re classics of the field. As soon as you start to get a vibe that a book is a classic in a field that you want to build mastery in, add it to the list.

Your list will start small but will quickly expand. Before you know it, you’ll have a longer to-read list than you could ever hope to finish.

Alternate Between Depth and Entertainment

Textbooks aren’t just for college kids. Some of the most valuable reads I’ve discovered were pretty hefty tomes. Even if you love to read, these things take some serious effort to get through. Diffusion of Innovations and The Social Animal are great examples. Full of insights and value but a slog to get through. There are no quirky anecdotes, no fun tangents, and no narrative to speed things along. You’ll earn each and every page.

You’ll want to tackle these pillars of learning but don’t overdo it. After you’ve finished one, make sure to sprinkle in a few lighter reads. Something from Michael Lewis or Seth Godin will do the trick.

I also alternate between topics I love and topics I’m not ecstatic about but are still critical to rounding out my knowledge. For example, I’m not super passionate about data visualization but Show Me the Numbers gives a great foundation for building out graphs and tables in your work. Occasionally pick up a book that is critical to building out your expertise regardless of how excited you are to read it. Once you finish, jump back into some topics that you’re more excited about.

Keep switching up the book types and topics so that reading doesn’t feel too much like work. Don’t let yourself get burned out on it. A fiction or non-business book goes a long way to giving you some relief after conquering some musty tome.

Cut Your Cable

I’ve never had cable and I never will.

Don’t get me wrong, I love movies and solid TV. But I demand control of my time and my schedule when it comes to entertainment. Having an all-you-can-eat stream of B-rate reality TV is just too much of a time suck. You’ll progress so much faster if you replace that time with reading.

Get a Netflix subscription, set aside $50 to buy anything else you want on Amazon or iTunes, and you’ll have more than enough to watch. You’ll still spend less than most cable subscriptions cost. And with HBO Now, there’s no reason to have cable.

Take the 2-3 hours of TV that most people watch each day and replace them with reading. You’ll easily finish a book every week.

Don’t Speed Read

Could I double the number of books that I read each year by getting good at speed reading? Absolutely. Do I want to? Nope.

You see, I’m not reading for sheer volume. That misses the point. The goal here is mental preparation, so I’m as qualified as possible even when I walk into completely novel situations. So my main priority is retention. The better I can incorporate what I’ve learned into my mental models, the more I can rely on instinct in any number of crazy situations.

This is why I read at a speed that’s about half of my max. A 50% pace means I can use the other 50% of my energy to think deeply about each concept as it comes up in the reading. Connecting the reading to my experiences and everything else I’ve already learned helps me retain each new principle.

Volume certainly helps as you continue to grow your own capabilities. But retention shouldn’t be sacrificed in the name of hitting some vanity metric for how many books you read. Books will only help you if you actually integrate them into your own mental models.

Wrap Up

If you want to accelerate your career or your business, you’ll need to constantly be preparing for the next major challenge. Voracious reading is one of my trade secrets to stack the odds in my favor when a new opportunity comes along.

And here are my 8 tips for reading 70 books a year:

  • Anchor 30-60 minutes of daily reading
  • Always have an unfinished book on hand
  • Buy your next book before you finish your current book
  • Take your book with you everywhere
  • Keep a to-read list
  • Alternate between depth and entertainment
  • Cut your cable
  • Don’t speed read

If you want to keep tabs on what I’m reading, I list every business book that I finish here.

How to Keep Riding the Slack Rocketship Without Blowing It Up

March 7, 2015 By Lars Lofgren 8 Comments

Growth graphs like this don’t come around too often:

[Chart: Slack daily active users over its first year, through February 12, 2015]

I’ve gotta hand it to Slack, they’re playing the PR game pretty well too. Headline after headline on TechCrunch and Techmeme keeps telling us all how crazy the growth is over there. They’ve already built the brand to go with it.

But here’s the thing about rocketships: they either go further than you ever thought possible or they blow up in your face.

It all comes down to momentum. Keep it up and you’ll quickly dominate your market.

But once you start to lose momentum, the rocketship rips itself apart. Competitors catch up, market opportunities slip away, talent starts to leave, and growth stalls. It’s a nasty feedback loop that’ll do irrecoverable damage. Once that rocketship lifts off, you either keep accelerating growth or momentum slips away as you come crashing back down. Grow or die.

There’s no room for mistakes. But when driving growth, there are two forces that will consistently try to bring you crashing back down.

1) The Counter-Intuitive Nature of Growth

At KISSmetrics, I launched a bunch of tests that didn’t make a single bit of sense when you first looked at them. And many of them were our biggest winners.

Here’s a good example. Spend any time learning about conversion optimization and you’ll come across advice telling you to simplify your funnel. Get rid of steps, get rid of form fields, make it easier. Makes sense right? Less effort means more people get to the end. In some cases, this is exactly what happens.

In other cases, you’ll grow faster by adding extra steps. Yup, make it harder to get to the end and more people finish the funnel. This is exactly what happened during one of our homepage tests. We added an entire extra step to our signup flow and instantly bumped our conversions to signups by 59.4%.

Here’s the control:

[Screenshot: homepage OAuth test, control]

Here’s the variant:

[Screenshot: homepage OAuth test, variant]

In this case, the variant dropped people straight into a Google OAuth flow. But we didn’t get rid of our signup form since we still needed people to fill out the lead info for our Sales team.

Number of steps on the control:

  1. Homepage
  2. First part of signup form
  3. Second part of signup form (appeared as soon as you finished the first 3 fields)

Number of steps on the variant:

  1. Homepage
  2. Google account select
  3. Google OAuth verification
  4. Signup form completion

You could say the minimalist design on that variant helped give us the win, and that’s true. But we saw this “add extra steps to get more conversions” effect across multiple tests. It works like magic on homepages, signup forms, and webinar registrations. It’s one of my go-to growth hacks at this point.

Counter-intuitive results come up so often that it’s pretty difficult to call winners before you launch a test. At KISSmetrics, we had some of the best conversion experts in the industry like Hiten Shah and Neil Patel. Even with this world-class talent, we STILL only found winners 33% of the time. That’s right, the majority of our tests FAILED.

We ran tests that I would have put good money on. And guess what? They didn’t move our metrics even a smidge. The majority of our tests made absolutely zero impact on our growth.

It takes a LOT of testing to find wins like this. So accelerating growth isn’t a straightforward process. You’ll run into plenty of dead ends and rabbit holes as you learn which levers truly drive growth for your company.

2) You’ll Get Blind-Sided by False Positives

Fair enough, growth is counter-intuitive. Let’s just A/B test a bunch of stuff, wait for 95% statistical significance, and then launch the winners. Problem solved!

Not so fast…

That 95% statistical significance that you’ve placed so much faith in? It’s got a major flaw.

A/B tests will lead you astray if you’re not careful. In fact, they’re a lot riskier than most people realize and are riddled with false positives. Unless you do it right, your conversions will start bouncing up and down as bad variants get launched accidentally.

Too much variance in the system and too many false positives mean you’re putting a magical growth rate at serious risk. Slack wants to take as much risk off the table as it can while still chasing big wins. And the normal approach to statistical significance doesn’t cut it. 95% statistical significance launches too many false positives, and those will drag down conversions and slow momentum.

Let’s take a step back. 95% statistical significance comes from scientific research and is widely accepted as the standard for determining whether the difference between two data sets is real or just random noise. But here’s what gets missed: 95% statistical significance only works if you’ve done several other key steps ahead of time. First you need to determine the minimum percentage improvement that you want to detect, like a 10% change. THEN you need to calculate the sample size that you need for your experiment. The results don’t mean anything until you hit that minimum sample size.
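If you actually want to run that math, here’s a minimal sketch of the standard sample-size calculation for comparing two conversion rates. The baseline rate and minimum lift below are made-up numbers, purely for illustration:

```python
# A minimal sketch of the sample-size math, not from any specific tool.
# The baseline conversion rate and minimum detectable lift are assumptions.
from statistics import NormalDist

def sample_size_per_variant(baseline, min_relative_lift, alpha=0.05, power=0.8):
    """Visitors needed in EACH variant to detect at least `min_relative_lift`
    over `baseline` with a two-sided test at `alpha` and the given power."""
    p1 = baseline
    p2 = baseline * (1 + min_relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1

# Example: 5% baseline signup rate, looking for at least a 10% relative lift.
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 visitors per variant
```

Swap in your own baseline and minimum lift; the point is just how quickly the required traffic adds up.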

Want to know how many acquisition folks calculate the sample size based on the minimum difference in conversions that they want to detect? Zero. I’ve never heard of a single person doing this. Let me know if you do, I’ll give you a gold star.

But I don’t really blame anyone for not doing all this extra work. It’s a pain. There are already a ton of steps needed to launch any given A/B test: hypothesis prioritization, estimating impact on your funnel, copy, wireframes, design, front-end and back-end engineering, tracking, making the final call, and documentation. No one’s particularly excited about adding a bunch of hard-core statistical steps to the workflow. This also bumps the required sample sizes for conversions into the thousands. Probably not a major problem for Slack at this point, but it will dramatically slow the number of tests that they can launch. Finding those big wins is a quantity game. If you want higher conversions and faster viral loops, it’s all about finding ways to run more tests.

When you’re in Slack’s position, the absolute last thing you want to do is expose yourself to any unnecessary variance in your funnel and viral loops. Every single change needs to accelerate the funnel like clockwork. There’s too much at stake if any momentum is lost at this point. So is there another option other than doing all that heavy duty stats work for each A/B test? Yes there is.

The Key to Keeping Rocketships Flying

Right now, the team at Slack needs to be focusing on one thing: how not to lose.

Co-founder Stewart Butterfield mentioned that he’s not sure where the growth is coming from. This is a dangerous spot to be in. As they start to dive into their funnel, there’s a serious risk of launching variants that only looked like winners because of false positives. They’ll need every last bit of momentum if they want to avoid plateauing early.

As it turns out, there is a growth strategy that takes these A/B testing risks off the table. It’s disciplined, it’s methodical, and it finds the big wins without exposing you to the normal volatility of A/B testing. I used it at KISSmetrics to grow our monthly signups by over 267% in one year.

Here’s the key: bump your A/B decision requirement to 99% statistical significance. Don’t launch a variant unless you hit 99%. If you’re at 98.9% or less, keep the control. And run everything you can through an A/B test.

Dead serious, the control reigns unless you hit 99% statistical significance. You’ll be able to keep chasing counter-intuitive big wins while protecting your momentum.
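Here’s a rough sketch of what that decision rule looks like in code, assuming you’re tracking raw visitor and conversion counts for the control and the variant, and reading “99% significant” as one-sided confidence that the variant really beats the control. The example numbers are made up:

```python
# A rough sketch of the 99% decision rule, not a production testing tool.
from statistics import NormalDist

def keep_or_launch(control_conv, control_n, variant_conv, variant_n):
    """Launch the variant only at 99% statistical significance;
    otherwise the control stands."""
    p_c = control_conv / control_n
    p_v = variant_conv / variant_n
    p_pool = (control_conv + variant_conv) / (control_n + variant_n)
    se = (p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p_v - p_c) / se
    confidence = NormalDist().cdf(z)  # one-sided confidence the variant beats the control
    return "launch variant" if confidence >= 0.99 else "keep control"

# Made-up example: 2,000 visitors per arm.
print(keep_or_launch(control_conv=100, control_n=2000,
                     variant_conv=150, variant_n=2000))  # launch variant
```

Anything below that bar, even 98.9%, and the control stays put.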

At KISSmetrics, we actually did a bunch of Monte Carlo simulations to compare different A/B testing strategies over time.

I’ve posted the results from 3 different strategies below. Basically, more area under the curve means more conversions earned. Each dot represents a test that looked like a winner. You’ll notice that many dots actually bring the conversions down. That comes from false positives and not being rigorous enough with your A/B testing.

Here’s what you get if you use the scientific researcher strategy:

[Chart: conversion rate vs. observations, scientific researcher strategy]

Not much variance in this system. Winners are almost always real winners.

Here’s your regular sloppy 95% statistical significance strategy that makes changes as early as 500 people in the test:

[Chart: conversion rate vs. observations, impatient 95% strategy]

Conversions bounce around quite a bit. False wins come up often, which means that if you sit on a particular variation for long, it will drag those conversions down and slow growth. There goes your momentum.

Now let’s look at the 99% strategy that waits for at least 2000 people in the test for a decent sample size:

[Chart: conversion rate vs. observations, realist 99% strategy]

Still a chance to pick up false winners here but a lot less variance than 95%. Let’s quantify all 3 strategies real quick by calculating the area under the curve. Then we’ll be able to compare them instead of just eye-balling the simulations.

  • Scientific researcher = 67,759
  • 95% statistical significance = 60,532
  • 99% statistical significance = 67,896

Bottom line: the 99% strategy performs just as well as the scientific researcher and a lot better than the sloppy 95%. It’s also easy enough for any team to implement without having to do the extra stats work.
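If you want to poke at this yourself, here’s a toy version of that kind of simulation. It is not the actual KISSmetrics model; the baseline rate, the mix of winning and losing ideas, and the visitor budget are all assumptions. It just shows how an impatient 95% strategy and a patient 99% strategy play out over a long run of back-to-back tests:

```python
# A toy Monte Carlo comparison of A/B testing strategies. Every input here
# (baseline rate, lift distribution, visitor budget) is an illustrative guess.
import random
from statistics import NormalDist

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def simulate(alpha, visitors_per_arm, total_visitors=400_000, baseline=0.20, seed=1):
    """Run back-to-back A/B tests on a fixed visitor budget and return the
    total conversions the funnel earned along the way."""
    random.seed(seed)
    rate, conversions, remaining = baseline, 0, total_visitors
    while remaining >= 2 * visitors_per_arm:
        # Assumed prior: most ideas do roughly nothing, a few are real winners.
        if random.random() < 0.1:
            true_lift = 0.15                       # occasional genuine winner
        else:
            true_lift = random.gauss(-0.02, 0.05)  # duds and mildly harmful ideas
        variant_rate = min(max(rate * (1 + true_lift), 0.0), 1.0)
        conv_c = sum(random.random() < rate for _ in range(visitors_per_arm))
        conv_v = sum(random.random() < variant_rate for _ in range(visitors_per_arm))
        conversions += conv_c + conv_v
        remaining -= 2 * visitors_per_arm
        if conv_v > conv_c and p_value(conv_c, visitors_per_arm,
                                       conv_v, visitors_per_arm) < alpha:
            rate = variant_rate  # "launch" it, real win or false positive
    return conversions

print("impatient 95%, 500 visitors/arm :", simulate(alpha=0.05, visitors_per_arm=500))
print("patient 99%, 2,000 visitors/arm:", simulate(alpha=0.01, visitors_per_arm=2000))
```

Swap in your own funnel numbers. The looser threshold simply lets more not-really-winners through, which is the variance you can see in the charts above.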

The 99% rule is my main A/B testing rule but here are all of them:

  • Control stands unless the variant hits a lift at 99% statistical significance.
  • Run the test for at least a week to get a full business cycle.
  • Get 2,000 people through the test so you have at least a halfway decent sample size.
  • If the test looks like a loser or has an expected lift of less than 10%, kill it and move on to the next test. Only chase single digit wins if you have a funnel with crazy volume.

I used these rules to double and triple multiple steps of the KISSmetrics funnel. They reduce the risk of damaging the funnel to the bare minimum, accelerate the learning of your team, and uncover the biggest wins. That’s how you keep your growth momentum.

Embedding A/B Tests Into the Slack Culture

I can give you the rules for how to run a growth program. But you know what? It won’t get you very far unless you instill A/B tests into the fabric of the company. The Slack team needs to pulse with A/B testing. Even the recruiters and support folks need to get excited about this stuff.

This is actually where I failed at KISSmetrics. Our Growth and Marketing teams understood A/B testing and our entire philosophy behind it. We cranked day in and day out. It was our magic sauce.

But the rest of the team? Did Sales or Support ever get it? Nope. Which meant I spent too much time fighting for the methodology instead of working on new tests. If I had spent more time bringing other teams into the fold from the beginning, who knows how much further we could have gone.

If I was at Slack, one of my main priorities would be to instill A/B testing into every single person at the company. Here’s a few ideas on how I’d pull that off:

  • Before each test, I’d show the entire company what we’re about to test. Then have everyone vote or bet on the winners. Get the whole company to put some skin in the game. Everyone will get a feel for how to accelerate growth consistently.
  • Weekly A/B testing review. Make sure at least one person from each team is there. Go through all the active A/B tests, current results, which ones finished, final decisions, and what you learned from them. The real magic of A/B testing comes from what you learn on each test, so spread those lessons far and wide.
  • Do monthly A/B testing growth talks internally. Include the rules for testing, why you A/B test, the biggest wins, and have people predict old tests so they get a feel for how hard it is to call winners ahead of time. Get all new hires into these. Very few people have been through the A/B test grind; you need to get everyone up to speed quickly.
  • Monthly brainstorm and review of all the current testing ideas in the pipe. Invite the whole company to these things. Always remember how hard it is to predict the big winners ahead of time; you want testing ideas coming at you from as many sources as possible.

Keep Driving The Momentum of That Slack Rocketship

I’m really hoping the team at Slack has already found ways to avoid all the pitfalls above. They’ve got something magical and it would be a shame to lose it.

To the women and gents at Slack:

  • Follow the data.
  • Get the launch tempo as high as possible for growth; you’ll need to run through an awful lot of ideas before you find the ones that truly make a difference.
  • Only make changes at 99% statistical significance.
  • Spread the A/B testing Kool-Aid far and wide.
  • Don’t settle. You’ve got the magic, do something amazing with it.