
Lars Lofgren

Building Growth Teams


The 9 Delusions From the Halo Effect

May 31, 2015 By Lars Lofgren 12 Comments

You’re being lied to.

Well, not intentionally.

We’re constantly being pinged with stories of companies that have rocketed to success. Especially in tech, there’s always another $1 billion unicorn around the corner. Uber, Facebook, Airbnb, Slack, Zenefits, Box, Shopify, yadda yadda yadda.

At the same time, rock-solid companies seem to lose their way and crater.

We’re all desperate to know why.

Makes sense. We want to replicate the crazy success and avoid failure.

This is where we all get sucked into the nonsense narratives. They’ll give you false hope on how to produce success.

Here’s a good example: should a company expand into different products, industries, or markets? We’ll answer this question in a minute.

But first, who loves LEGO? I DO. My favorite childhood toy by far. You know how people will buy huge mansions and a dozen sports cars if they ever hit it big? I’ll just buy every LEGO set and fill an entire room with them. These days, they even have a Batwing with Joker steamroller set. How cool is THAT?

[Image: LEGO Batwing and Joker Steamroller set]

As it turns out, LEGO is a great case study for how delusional we can be about what produces successful companies.

Go check out LEGO’s 2014 annual report. In 2014, their net profit increased by about 15% to over 7 billion Danish krone (DKK). At current exchange rates, that’s about USD $1 billion. Back in 2011, they pulled in DKK 4 billion in net profit. So they’ve sustained strong growth since 2011 and nearly doubled their net profit. Not shabby at all.

Why is LEGO doing so well? Management credits expansions beyond its core business: they crushed it with The LEGO Movie and the new line of LEGO sets released alongside it. Right on page 5 of the annual report: “new products make up approximately 60% of the total sales each year.” They’ve also seen a lot of growth from the toy market in Asia.

So we’ve answered our question right? If we want to keep growing, we’ll want to expand beyond our core product and market base at a certain point, right?

Well, wait a minute. Our story isn’t that simple.

Go back to 2004, when LEGO nearly went bankrupt. Their COO, Poul Plougmann, got sacked and the business press lambasted the company for poor results. They caught a ton of flak for releasing a LEGO Harry Potter line (apparently, sales slowed when there was a gap between some of the Harry Potter movie releases), experimenting with new toy products, jumping into video games, launching a failed TV show, and trying to go beyond its core brand. The consensus was that they should get back to their core base and stop messing around by trying to innovate into new products.

Wait, which is it? In 2014, product expansion from the LEGO Movie helps push the company to new heights. In 2004, the LEGO Harry Potter line, TV shows, and the first attempt at video games nearly pushes it to bankruptcy. During each period, we push narratives and recommendations that contradict themselves. Go back to your core base! Wait, never mind! Expand into new products!

I can’t take credit for this insight or finding the LEGO story. It’s one of the case studies used in The Halo Effect by Phil Rosenzweig.

Rosenzweig shows how narratives are twisted to explain results after they occur. He wrote the original version of his book back in 2007 (there’s an updated 2014 edition that you should grab if you haven’t yet). Then after the book was published, LEGO turned around and we started attributing its success to LEGO’s constrained innovation:

  • An interview from Wharton with David Robertson
  • A brief article in the Harvard Business Review (also by David Robertson)
  • An article from the Daily Mail
  • Business Insider’s account of the come-back
  • Another from Forbes

LEGO went back to its base. Innovation trashed the company in 2004 because it was highly unprofitable and expanded beyond its core strengths. Now LEGO has entered another golden era by constraining innovation.

But LEGO just had another huge year by expanding into its first movie. Hard to get further from its product base than that. A decade ago, the LEGO TV show got part of the credit when LEGO struggled. Now the LEGO Movie gets the credit when profits have turned around.

Again, which is it? Innovation? Constrained innovation? Innovation as long as you do these 7 simple steps? Maybe all of the above? Reducing a business to a simple narrative for a blog post or interview is incredibly difficult. And you’ll want to be careful of any source that attempts to do so.

To be fair, David Robertson and Bill Breen wrote a book that dives into the Lego story. I’m hoping they capture the nuance of what went into LEGO’s turn-around. I haven’t read the book myself but it’s on my to-read list.

We’re all exceptionally good at rationalizing any argument. If things go well, we’ll cherry pick some attributes and credit them for the company’s success. Then when things go sideways, we take the same attributes to explain the failure. It all sounds nice and tidy. Too bad it’s a poor reflection of reality.

Phil Rosenzweig calls this habit of ours the Halo Effect. When things go well, we attribute success to whatever attributes stand out at the company. When things go poorly, we attribute bad results to those exact same attributes. It’s one of the 9 delusions that he covers in his book. Let’s go through each of them.

The Halo Effect

The tendency to look at a company’s overall performance and make attributions about its culture, leadership, values, and more. In fact, many things we commonly claim drive company performance are simply attributions based on prior performance.

This is what happened to LEGO. In 2004, they were skewered by the press for trying to expand beyond their core business. Now they can’t get enough praise as they drive growth in new markets and product lines.

This happens to companies, teams, and you. When things go well, the quirks get credit for success. When things go poorly, those same quirks get the blame. Our stories search for what’s convenient, not what’s true.

Remember this when you’re in your next team meeting. Someone will float a story for how you got to this point. If it sounds good, the story will spread and your whole organization will start shifting in response to it. And a nonsense story means nonsense changes. There are two things you can do to limit these nonsense stories:

  • Chase causality as often as you can (more on this in a moment). The better your team understands how your systems really work, the closer your stories will be to the truth.
  • Realize that your stories are typically nonsense. It’s your goal to test the validity of that story as fast as you can.

The Delusion of Correlation and Causality

Two things may be correlated, but we may not know which one causes which. Does employee satisfaction lead to high performance? The evidence suggests it’s mainly the other way around — company success has a stronger impact on employee satisfaction.

We’ve all heard the adage “correlation, not causation.” But when you’re about to come up short on a monthly goal, how easy is it to remember correlation versus causation? It’s not. We all break and reach for the closest story we can. Even if we avoid throwing blame around, we still grasp for any story that will guide our way through the madness.

Proving causality is one of the most difficult bars to reach. Very few variables truly impact our goals in a meaningful way. How do we deal with this?

If you only rely on after-the-fact data, you never move beyond correlation. Every insight and every bump in a metric is, at best, a correlation. The only way to establish any degree of causality (and we’re never 100% sure) is to run a controlled experiment of some kind. You’ve got to split your market into two groups and see what happens when you isolate variables.

This is why I push so hard for A/B tests and get really strict with data quality. They allow us to break past the constraints of correlation and gain a glimpse of causation.
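To make this concrete, here’s a minimal sketch of the statistics behind a simple A/B test: a two-proportion z-test that checks whether a difference between a control and a variant is likely to be more than noise. The conversion counts below are made-up numbers, not real campaign data.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: control converts 200/2000, variant converts 260/2000.
z, p = two_proportion_z_test(200, 2000, 260, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p -> unlikely to be pure noise
```

A low p-value doesn’t prove the variant caused the lift by itself, but because you randomized who saw which version, it’s about as close to causality as after-the-fact data will ever get you.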

If you limit your learning to just correlation, you’ll get crushed by those chasing causality. They’ll have a much deeper understanding of your environment than you do. You won’t be able to keep up.

And remember, the business myths, stories, best practices, and press rarely look at correlation versus causation. It’s all just correlation.

The Delusion of Single Explanations

Many studies show that a particular factor — strong company culture or customer focus or great leadership — leads to improved performance. But since many of these factors are highly correlated, the effect of each one is usually less than suggested.

Data is messy, markets are messy, customers are messy. The complexities of these systems vastly exceed our ability to understand or adequately measure them. Variables interact and compound in limitless ways.

Whenever someone gives you a nice, tidy explanation for why a business succeeded or failed, assume it’s nonsense.

You can’t depend on a single variable to drive your business forward. World-class teams have mastered countless business functions, everything from employee benefits to market research. The hottest New York Times bestseller may give you a 5 step process on how to conquer the world with nothing other than whatever flavor-of-the-month strategy everyone loves at the moment. But that’s a single variable among many.

Remember that your business moves within an endlessly complex system. Not only are you trying to change this system, you’ll be pushed around by it.

The Delusion of Connecting the Winning Dots

If we pick a number of successful companies and search for what they have in common, we’ll never isolate the reasons for their success, because we have no way of comparing them with less successful companies.

Good ol’ survivorship bias. We can’t just look at winners. We need to find a batch of losers and look for the differences between the two groups. Otherwise, we’re just pulling out commonalities that don’t mean anything.

The tech “unicorn” fad has succumbed to this delusion. Everyone’s looking for patterns among the recent $1 billion tech startups, hoping to build their own unicorn. But those startups do many things in exactly the same way as all the startups that blow up or stall out. We just don’t hear about those failures. And if we do, those stories aren’t deconstructed in the same level of detail as the unicorns. So we get a picture of what amazing companies look like but a very limited view on how they differ from their failed counterparts.
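Here’s a toy simulation of that trap. The traits and odds are entirely invented, and success is assigned by pure chance, independent of every trait:

```python
import random

random.seed(42)

traits = ["flat_hierarchy", "open_office", "charismatic_founder"]

def make_company():
    # Each trait is common (70% of companies have it)...
    c = {t: random.random() < 0.7 for t in traits}
    # ...and "unicorn" status is pure luck, unrelated to any trait.
    c["unicorn"] = random.random() < 0.01
    return c

companies = [make_company() for _ in range(1000)]
winners = [c for c in companies if c["unicorn"]]
losers = [c for c in companies if not c["unicorn"]]

for t in traits:
    in_winners = sum(c[t] for c in winners) / max(len(winners), 1)
    in_losers = sum(c[t] for c in losers) / len(losers)
    print(f"{t}: {in_winners:.0%} of winners, {in_losers:.0%} of losers")
# Winners and losers share each trait at roughly the same rate.
# Look only at the winners and every trait looks like a secret of success.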

Study the failures just as deeply as the successes.

The Delusion of Rigorous Research

If the data aren’t of good quality, it doesn’t matter how much we have gathered or how sophisticated our research methods appear to be.

Rosenzweig takes a shot at Jim Collins with this one. Collins has written several renowned books like Good to Great, Built to Last, and Great by Choice. He and his team do a ton of historical research to figure out which attributes separate great companies from average companies. As Rosenzweig points out, most of this research is based on flawed business journalism that suffers from the Halo Effect. The raw data for Collins’ research is deeply flawed, which means his books aren’t as solid as many people think.

Regardless of how you feel about Collins’ books, this is still a critical delusion to remember. It doesn’t really matter how sophisticated you are with modeling, data science, research, or analytics if your data sucks. Fix your data first before trying anything fancy.

This is where I start with every business I work with. Before jumping into growth experiments, A/B testing, or building out channels, I always make sure I can trust my data. Data’s never 100% perfect but there needs to be a low margin of error. The quality of your insights depends on the quality of your data.

The Delusion of Lasting Success

Almost all high-performing companies regress over time. The promise of a blueprint for lasting success is attractive but not realistic.

You will regress to the mean. Crazy success is an outlier by default. Sooner or later, results come back down to typical averages.

Mutual funds prove this point perfectly. In any 2-year period, you can find mutual funds that crush the S&P 500. Wait another 5-10 years and those same mutual funds have fallen back to earth. Your company is in the same boat. If things go crazy well, it’s a matter of time before you come back down. Take advantage of your outlier while it lasts.
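A quick simulation shows why. In this toy model (all numbers invented), each fund’s return is a tiny skill edge plus a lot of luck. Pick the top performers from one period and watch what happens to them in the next:

```python
import random

random.seed(7)

# Each fund has a small, persistent skill edge...
funds = [{"skill": random.gauss(0, 0.01)} for _ in range(500)]

def period_return(fund):
    # ...but per-period luck dwarfs that edge.
    return fund["skill"] + random.gauss(0, 0.15)

for f in funds:
    f["p1"] = period_return(f)
for f in funds:
    f["p2"] = period_return(f)

# Take the top 5% of funds by period-1 performance.
top = sorted(funds, key=lambda f: f["p1"], reverse=True)[:25]
avg_p1 = sum(f["p1"] for f in top) / len(top)
avg_p2 = sum(f["p2"] for f in top) / len(top)
print(f"Top funds: {avg_p1:+.1%} in period 1, {avg_p2:+.1%} in period 2")
# The stars of period 1 were mostly lucky, so they collapse back toward
# the mean in period 2 -- no change in skill required.
```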

This is particularly dangerous with individual or team performance. Is it really talent or are you just an outlier? Sooner or later, you’ll have some campaign or project that takes off. Well… if you launch enough stuff, you’re bound to get lucky. The real question is how long you can sustain it. Can you repeat that success? And since we all regress to the mean eventually, how can you use your current success to get through the eventual decline?

All channels decline, all products decline, all markets decline, all businesses decline. You will decline. What are you doing now to plan for it?

The Delusion of Absolute Performance

Company performance is relative, not absolute. A company can improve and fall further behind its rivals at the same time.

You’re graded on a curve whether you like it or not. Even if you’re improving, customers won’t care if your competitor is improving faster than you are. You’ll need to stay ahead of the pack no matter how fast the pack is already moving.

Otherwise, it’s a matter of time before you’ve lost the market. Your success isn’t determined in isolation. Just because you did a great job doesn’t mean you’ll achieve greatness.

This stems from a basic psychological principle: as humans, we do a terrible job at perceiving absolute value. This applies to pricing, customer service, product value, and every trait around us. In order to gauge how good or bad something is, we always look for something to compare it to. It really doesn’t matter if you cut prices by 50% if your competitor found a way to cut them by 60%. You’re still considered too expensive.

Your work will always be judged in relation to the work of your peers.

The Delusion of the Wrong End of the Stick

It may be true that successful companies often pursued a highly focused strategy, but that doesn’t mean highly focused strategies often lead to success.

Another shot at Good to Great with this one.

One of the core concepts in Good to Great is hedgehog versus fox companies. Hedgehog companies focus relentlessly on one thing. Foxes dart from idea to idea. According to Collins, amazing companies are all hedgehogs with ruthless focus.

But we don’t have the full picture of the risk/reward trade-off. It’s a lot like gambling or investing. You COULD throw your entire life savings into a single stock (hedgehog) and if that stock takes off… you’ll make a fortune. But if it doesn’t? You’ve lost everything. Investors who diversify (foxes) won’t reap extreme gains, but they also won’t expose themselves to extreme losses.

Companies might work very similarly. Yes, hugely successful companies could tend to be hedgehogs. They made big bets and won. But that might not be the best strategy for your company if it means taking on substantial amounts of risk. Most importantly, we can’t say for sure what the risk/reward trade-offs look like without a larger data set of companies. Even if great companies out-perform average companies when they’re hedgehogs, there could be just as many hedgehog companies that weren’t so lucky.
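Here’s a rough Monte Carlo sketch of that trade-off. The bet odds are made up, and both strategies have the exact same expected payoff; the only difference is how the risk is spread:

```python
import random

random.seed(1)

def bet_outcome():
    """One risky bet: 20% chance of a 10x win, 80% chance of keeping 0.1x."""
    return 10.0 if random.random() < 0.2 else 0.1

def hedgehog():
    # Everything riding on a single big bet.
    return bet_outcome()

def fox(n_bets=20):
    # The same capital spread evenly across many independent bets.
    return sum(bet_outcome() for _ in range(n_bets)) / n_bets

trials = 10_000
hh = [hedgehog() for _ in range(trials)]
fx = [fox() for _ in range(trials)]

hh_ruin = sum(h < 0.5 for h in hh) / trials
fx_ruin = sum(f < 0.5 for f in fx) / trials
print(f"hedgehog: mean {sum(hh) / trials:.2f}x, ruined {hh_ruin:.0%} of the time")
print(f"fox:      mean {sum(fx) / trials:.2f}x, ruined {fx_ruin:.0%} of the time")
# Same average payoff, wildly different odds of wipeout. Study only the
# surviving hedgehogs and concentration looks like genius.
```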

The Delusion of Organizational Physics

Company performance doesn’t obey immutable laws of nature and can’t be predicted with the accuracy of science — despite our desire for certainty and order.

Physics is beautiful and elegant. Business is not.

No matter what you do, you cannot remove uncertainty in business like you can with physics. Books, consultants, blog posts, and pithy tweets will all try to convince you that a simple step-by-step process will take your business to glory. As much as we’d all like to have simple rules to follow, that’s not how this game is played. Business cannot be reduced to fundamental laws or rules.

And sometimes, the outcome is completely outside your control. Even if you do everything right, follow all the right strategies, use the best frameworks, hire the best people, and build something amazing, the whole business can still go sideways on you. We can’t remove uncertainty from the system. All we can do is stack the odds in our favor. Fundamentally, business and careers are endless games of probability.

Recap Time! The 9 Delusions From the Halo Effect

Here are all 9 delusions in a nice list for you:

  • The Halo Effect
  • The Delusion of Correlation and Causality
  • The Delusion of Single Explanations
  • The Delusion of Connecting the Winning Dots
  • The Delusion of Rigorous Research
  • The Delusion of Lasting Success
  • The Delusion of Absolute Performance
  • The Delusion of the Wrong End of the Stick
  • The Delusion of Organizational Physics

Don’t get sucked into the delusional narratives of success. Embrace the uncertainty.

How to Test When You Don’t Have Any Data

August 30, 2012 By Lars Lofgren Leave a Comment

We all know how important testing is. Some of our marketing works, some of it doesn’t. Some customers love us, others don’t. Testing helps us cut through that endless fog of uncertainty.

If you spend any time in the startup or online marketing circles, you’ll hear how wonderful A/B testing is. Let’s say you want to test your home page. Throw up two different versions (usually the current version and your new one), compare the results, and BAM, you know which is better.

Sounds easy right?

Well, it isn’t. Here’s the thing: we need traffic to make those tests valid. If you only have 10 people go through your test, the numbers are just too small. You won’t be able to learn anything. At that volume, our results are just random coincidence. We might as well flip a coin. So we need dozens, hundreds, and preferably thousands of people to run our test on.
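To put a rough number on “thousands,” here’s the standard sample-size formula for a two-proportion test. The baseline conversion rate and the lift we want to detect are hypothetical:

```python
import math

def sample_size_per_variant(base_rate, lift, alpha=0.05, power=0.8):
    """Approximate per-variant sample size for a two-sided two-proportion test."""
    z_alpha, z_beta = 1.96, 0.84   # z-scores for alpha=0.05 and 80% power
    p1, p2 = base_rate, base_rate * (1 + lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Detecting a 20% relative lift on a 5% conversion rate:
n = sample_size_per_variant(0.05, 0.20)
print(n)  # thousands of visitors per variant, before the test means anything
```

With a handful of visitors, the test simply can’t distinguish a real lift from coin-flip noise.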

Well that’s just swell. What do we do if we don’t have hundreds of customers or visitors? What if we have like 2?

Even when you only have a few customers or visitors, you still need to be testing. But you’ll need to take a completely different approach. And it just so happens that we use this approach every day at KISSmetrics (for slightly different reasons).

Instead of obsessing over data, we need to reach out and talk to people directly.

Let’s run through the nitty gritty with a real example.

How I Get Test Results From Small Groups of People

Right now, I’m helping redesign the People Search report in KISSmetrics. The report helps you find different groups of people within your data. And we’re revamping it to provide better insights and make it easier to use.

If we followed the conventional approach, we would build a second version of the report, enable it for a small portion of our customers, and compare engagement rates between the two groups.

But we’ve gone a different route.

Before we drafted a single new design, we started talking to our customers. In fact, we scheduled meetings with 5 of them to get a DEEP understanding of how they use the report. We learned why they use the report, what frustrates them, and how the report helps them build a better business.

And we didn’t stop there.

We started putting together some mock-ups (really rough drawings) of ideas we had on how to make it better. Then we scheduled some more meetings. This time, we showed the drawings alongside the current version. First, we asked them to walk us through the current report and how they use it. Then we showed them the drawings of the new version and asked them what they liked and didn’t like. We ran our own little A/B test right on Skype.

The feedback we got was amazing. It saved us hours of work and kept us from building something that would have been much worse. Right away, we learned that one of our ideas was straight up terrible. Not kinda sorta bad. Completely terrible.

With a few meetings and a drawing, we ran our A/B test with limited data.

In our case, we’re looking for speed. Will 5 people give us a complete picture of what all of our customers want? No. But it will allow us to move a LOT faster. If we wanted to do a legit A/B test, we’d have to make a bunch of guesses on what we think are improvements, build and design the whole thing, then launch it. This takes weeks of effort.

Instead, we can get quick feedback on our overall direction and change course without having to build the whole thing.

We also gain a depth of understanding that a simple A/B test would never give us. By talking through the report with someone, we understand what their biggest frustrations are and how they incorporate it into their business. Then we’re able to build a new version that goes far beyond the first.

And this process works just as well when you don’t have enough data to run a full A/B test.

What if you only have 2 customers? Reach out to them and talk through your different ideas. What if you don’t have ANY customers? Get in touch with the types of people you think would be your customers. By listening for the passion in people’s voices, you’ll quickly know if you’re on track.

When multiple people start getting excited about the same idea, that’s when it’s time to turn that idea into a reality.

5 Best Practices When Talking to Customers and Testing Ideas

1. Ask open-ended questions without leading the customer.

Don’t show someone a drawing of a new design or product, explain the entire thing to them, and then ask for feedback. You won’t be there to explain the concept to every customer. You want that initial reaction to better understand what customers will be going through when they see it for the first time on their own.

2. This is not a sales call.

Don’t sell anything, not even yourself. In fact, you should be listening a lot more than you’re talking. Spend your energy focusing on understanding the perspective of who you’re talking to and ask them great questions.

3. Take notes.

Once you start doing several of these, it’s going to be really hard to keep track of who said what. Taking notes will make sure you’re able to keep everything straight. Feel free to record the meeting if you don’t want to take notes at the same time. You can even have your recordings transcribed for you and save some time.

4. Look for passion.

You’re on the right track when people start to get excited over your ideas. If they start asking “Where can I get this?” or “When will it be available?”, you’re definitely on to something. Listen for the passion in their voice, it’ll tell you what’s most important.

5. Look for trends.

We still need to be careful about drawing conclusions from too little data. Remember that one idea we had to completely throw out because no one liked it? Well, one guy liked it. If I had only talked to him, I would have recommended changes that the rest of our customer base would have HATED. Avoid making changes based on a single comment. Look for things that come up over and over again. That’s where you want to focus.

The Bonus Points of Awesome

As you start doing these interviews on a regular basis, you’ll enjoy another serious benefit. You’ll start building relationships with all the customers you’re talking to. They’ll go from happy customers to loyal evangelists without you having to do any extra work. Simply by reaching out to people and asking them for help, you’ll make them more committed to your cause. They won’t be able to stop talking about how awesome you are when they see how their feedback helps you build a better product.

This alone is worth the effort of all these interviews. Combined with learning which of your ideas will work the best, this is one of the easiest ways to grow your business.

The Difference Between Web Analytics and Customer Analytics

June 13, 2012 By Lars Lofgren 5 Comments

When we see the word “analytics,” this is the image that pops into our heads:

[Image: Google Analytics Visitors Overview]

Good ol’ Google Analytics. Data like this measures every detail of our website. It’s not just “analytics,” it’s “web analytics.”

Web analytics does a fantastic job at measuring your website. Pageviews, bounce rates, time on site, it’s all there.

But how much of this is actually useful? If you’re a major internet marketing geek like me, you’ll find data that will help you grow your business. The thing is, it’s not easy to find the data that matters. When we focus on our website, we have to piece together all sorts of random information in an attempt to figure out what our customers are doing. This is doable if you:

  • Spend way too much time on the internet
  • Have experience with behavioral psychology, web design, and marketing
  • Don’t scream in terror at the sight of endless rows of data
  • Can manhandle your analytics to give you the data you need
  • Have the time to wade through everything until you find a piece of data worth acting on

For most people, several of these criteria are deal-breakers. So what are our options? Should we just ignore web analytics?

There’s a better way to do all this.

Using Customer Analytics

With web analytics, all our data is organized around our website. We have lists of pageviews, landing pages, and traffic sources. But there’s no way to organize the data around our customers.

The most critical piece of all this data (our customers) is completely missing. Well, that’s no good. I don’t know about you but I’d much rather spend time analyzing data on actual people instead of my website. I’d be able to stop spending so much time on data sources of low value like pageviews and get much better insights by looking at what real people are doing.

When we drop the website-centric data and focus exclusively on our customers, the data becomes much easier to manage. We’re not dealing with abstract data on a landing page, we have data on actual people. We know where each person came from, what they did on our site, how they became a customer, what they did as a customer, all of it.

Data on real people is easier to understand and use.

This is called customer analytics. Instead of focusing on your site, it collects data on your customers. In fact, every bit of data is tied to a real person.

What Customer Analytics Looks Like

This all sounds great in theory, but what does this sort of thing look like?

Check out this report:

[Image: KISSmetrics Person Report]

This is data on an individual customer. I’ve blurred out a line at the top, which is where the email address is displayed. We know how much they’ve spent with us (in total), where they came from, and how frequently they visit our site.

There’s also data like this:

[Image: KISSmetrics Timeline]

This timeline shows EVERY interaction a single person has had with our business. We know what features they’ve used, what actions they’ve taken, and how many times they did each action on each day. And those fancy lines connect the steps of funnels that we’re tracking. We have a complete picture of how a customer behaves.

Data from reports like these tells the story of our customers. We know what matters most to them so we can work to build a business they love. And if we run into data we can’t make sense of, we have their email address. Within a few minutes, we can reach out to them directly and get feedback on how we’re doing.

The only product I know that provides data like this is KISSmetrics.

Full Disclosure Time: I’m the Marketing Analyst at KISSmetrics which provides businesses like yours with customer analytics. This is where I’ve pulled these reports from. If you’re interested in learning more about customer analytics, reach out to me and I’ll help however I can.

Other Customer Analytics Options?

Every other instance of customer analytics that I’ve seen was built in-house. In other words, the company paid an engineer (or an entire team) full-time to collect data from multiple sources and organize it around their customers. Considering you’ll pay at least $100,000 in salary for a single engineer, you’ll easily spend hundreds of thousands of dollars over the course of several years to get a system like this fully functioning.

If you’re looking for a plug-and-play solution, I don’t know of any other options besides KISSmetrics. Definitely let me know if you’ve seen another example of customer analytics, I’d love to check it out.

The 3 Profiles that Every Google Analytics Account Needs

September 7, 2011 By Lars Lofgren 28 Comments

The vast majority of Google Analytics accounts are set up with a single profile. All the data, testing, and filters impact a single source of data. This can cause serious problems as your business grows. Before we dive into the profiles that your account should have, we need to cover how Google Analytics produces the data in your profiles.

Google Analytics does not store raw data about your traffic. By the time you see data in your Google Analytics account, it has gone through several steps:

  • Your Google Analytics Tracking Code sends visitor data to Google Analytics’ Servers.
  • Google Analytics’ servers compile data on your site.
  • About every 3 hours, Google Analytics processes your data using your settings. Filters are applied, conversions counted, and site search is processed.
  • Each day, Google Analytics dumps the data from the previous day.
  • Once all the data is processed, it’s stored in a database where you can access it through your Google Analytics account.

The key part of this process is that the data you have access to is not raw data, it’s data that’s been processed using settings you’ve applied to your profile. Once the data has been processed, there’s no going back.

This is why a new goal cannot be applied to historical traffic. Google Analytics processes goals only one day at a time.

In other words, the settings we apply to our Google Analytics profiles will change the data permanently for each day that they’re applied. If you have a typo in an exclude filter, you could irreversibly corrupt a portion of your data.
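Here’s a toy sketch of why that corruption is permanent. This is not how Google Analytics works internally, just the general shape of any pipeline that applies filters at processing time and then discards the raw hits (the IPs and pages are made up):

```python
# Raw hits as they arrive from the tracking code.
raw_hits = [
    {"ip": "203.0.113.7", "page": "/pricing"},
    {"ip": "198.51.100.2", "page": "/blog"},
    {"ip": "203.0.113.7", "page": "/signup"},
]

# Intended: exclude the office IP 203.0.113.7.
# Typo: the filter excludes the wrong IP instead.
exclude_ip = "198.51.100.2"  # oops -- meant 203.0.113.7

processed = [h for h in raw_hits if h["ip"] != exclude_ip]
raw_hits = None  # raw data is thrown away once processing finishes

# From here on, only `processed` exists. The office traffic you meant to
# drop is baked into this day's reports, and a real visitor is gone for
# good -- there's no raw copy left to re-process.
print(len(processed))
```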

To avoid such pitfalls, there are several profiles that you’ll need when using Google Analytics.

Raw Data Profile

This profile does exactly what you’d think: it houses raw data. There are no goals, no site search, no filters, no custom reports, nothing. The purpose of this profile is to provide a failsafe in case all your other profiles become corrupted. If you lose all your other data, you’ll still be able to pull reports from this profile. The data won’t be polished but you’ll be thankful it’s there if you ever need it.

Simply set up a profile, name it Raw Data, and then never touch it. The sooner you do this, the more data you’ll have backed up if something goes terribly wrong.

Test Profile

This is where you’ll test all your settings to avoid any accidental data corruptions.

Before applying goals, filters, or anything else to your other profiles, test them here first. You can watch the data for several days and see how your reports are impacted. Once you’re sure that everything is working precisely as it should, you can apply the same setting to the profile the setting was designed for.

Master Profile

When I’m combing through reports looking for useful insights and evaluating the success of my campaigns, I’m looking at my Master Profile. It’s the default profile on all my accounts. It’s also the most critical. This is where all my filters are applied, my goals and custom reports are set up, and site search is enabled.

Since I’m using it to do all my analysis, I want to make sure that any new settings aren’t going to skew my data and lead me to the wrong business decisions.

Once you have your Master Profile set up with whatever settings you prefer, you’ll generally want to leave it alone. If you’re going to apply new settings or tweak existing ones, run them on the Test Profile first.

Custom Profiles

Create other profiles to do whatever your heart desires. Here are some common custom profiles to give you some ideas:

  • Access-Based Profiles that limit people in your organization to only certain pieces of your Google Analytics data. Just filter out everything you don’t want them to see.
  • Source-Based Profiles include traffic from a single source. This will allow you to have direct access to how that traffic behaves. This is especially useful if you’re spending a lot of time and energy on a single traffic source (Facebook, Twitter, YouTube, etc.) and want to evaluate the success of those efforts.
  • Location-Based Profiles let you focus on a single location. If you’re expanding into an overseas market or focusing on a single location, you’ll want to easily see how that traffic behaves.

Aside from Access-Based Profiles, all of this data will be available in your Master Profile. You’ll just have to drill down to the right report or create custom reports. Segmenting everything into a new profile simply makes it faster to access what you want. If you’re going to be spending months (or years) evaluating a specific portion of your data, it’ll be worth it to set up a profile. Otherwise, just grab it from your Master Profile.

What other profiles do you use?
