Ironically, today’s thought was sparked by BuzzFeed’s insistence on delighting users over boosting numbers.

(Really!)

In an article at Poynter this week, Dan Oshinsky discussed a 2012 BuzzFeed email newsletter with an extremely high open rate. Its subject line?

“You’re fired.”

You might see how this would’ve gotten a massive open rate. You might also see why recipients replied with a fair amount of disdain for the tactic.

Wisely, BuzzFeed listened to the feedback and now actively chooses less hilariously awful ways (arguably) to get the numbers it wants.

Better Metrics

Choosing to be a little more humane wasn’t the only thing that changed at BuzzFeed as a result of that ridiculous email: it led to a revision of how success was measured.

BuzzFeed now de-emphasizes open rate as a “silver bullet” metric: it looks instead at click rate, the number of links readers click, and how long each reader spends with the email. The newsletter team also monitors growth rates for every newsletter’s subscriber list and compares how many people read each one on mobile versus desktop.

Email open rates are a great metric for ensuring you’re not cutting yourself off at the knees with a bad subject line. It’s not that they shouldn’t be used; it’s that they don’t tell you nearly as much about engagement. They’re more a way to confirm you’re setting the stage well for the killer content inside your email. From there, matching metrics to your goals for the email is the key to knowing whether you’re getting where you want to go. If your newsletter is a collection of links or seeks to drive recipients to a call to action, clicks are certainly paramount! If it’s built around readable content, knowing how long readers spend with it will tell you whether messages are actually being read or merely opened and archived.
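To make that arithmetic concrete, here’s a minimal sketch in Python. Every count below is hypothetical; substitute whatever your email provider actually reports.

```python
# A minimal sketch of common newsletter metrics (all counts hypothetical).

delivered = 50_000   # emails that reached an inbox
opens = 12_500       # unique opens
clicks = 3_100       # unique clicks on any link in the email

open_rate = opens / delivered     # did the subject line set the stage?
click_rate = clicks / delivered   # did the email drive action?
click_to_open = clicks / opens    # did the content deliver on the subject line's promise?

print(f"Open rate:          {open_rate:.1%}")      # 25.0%
print(f"Click rate:         {click_rate:.1%}")     # 6.2%
print(f"Click-to-open rate: {click_to_open:.1%}")  # 24.8%
```

Open rate tells you about the envelope; click-to-open is a rough check on whether the content inside lived up to it.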

Web analytics information presents a similar trap: big numbers look good, but they often don’t actually matter unless they show that visitors are doing the right things.

For instance, a high bounce rate is often perfectly reasonable! Consider these examples:

  • Sometimes, a call to action on a landing page will link a user to a new domain where they can log in to your product. That’s a bounce, but they’re going right where they should.
  • Your contact page often contains everything someone needs when they’re Googling for “company x phone number”. They’ll leave immediately, because they got what they came for.
  • Blog posts often have extremely high bounce rates, even if they’re great posts! A reader can get value out of it and still not want to click around further on the website.

In each case, you need better metrics to determine success:

  • For the landing page linking to a new domain, attach an event so the hand-off shows up as a success in your reports (see the sketch after this list).
  • On your contact page, use a call-tracking tool to learn how customers are getting your phone number. If you use AdWords, consider the WordPress plugin I helped write for CallRail.
  • Design your blog for further engagement to see if you can increase time spent on the site, but also watch returning visitors over time. Offering a newsletter subscription is another great option.
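For the first bullet, here’s a minimal sketch of what “attaching an event” can look like, using Google Analytics’ classic (v1) Measurement Protocol from Python. The tracking ID, category, and action names are placeholders, and a real setup would more likely fire the event client-side; this is just the shape of the idea.

```python
# Recording a cross-domain "login CTA" click as a Google Analytics event
# via the classic (v1) Measurement Protocol. IDs and names are placeholders.
import uuid

import requests


def track_event(category: str, action: str, label: str = "") -> None:
    requests.post(
        "https://www.google-analytics.com/collect",
        data={
            "v": "1",                  # protocol version
            "tid": "UA-XXXXXX-1",      # your GA property ID (placeholder)
            "cid": str(uuid.uuid4()),  # anonymous client ID
            "t": "event",              # hit type
            "ec": category,            # event category
            "ea": action,              # event action
            "el": label,               # event label
        },
        timeout=5,
    )


# Fire this when the landing-page CTA hands a visitor off to the login
# domain, so the "bounce" shows up in reports as a success.
track_event("landing-page", "cta-click", "login-redirect")
```

With the event in place, that bounce stops looking like a failure and starts counting toward the goal the page actually has.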

Wrong Metrics in Best Practices

The wrong metrics won’t just throw off your ability to gauge success and improve; they can also teach you exactly the wrong things to do.

After one of my talks at Dreamforce this year, an audience member asked about some sales tactics he’d heard from successful salespeople and wanted to know how he could apply them to email marketing (which is also designed to drive sales). The concepts seemed a little iffy, but the salespeople had seen success. Why not try them?

Here’s why: the audience member’s business runs off renewals, while salespeople are often only incentivized to make the sale. There’s no way to know if these tactics would actually pay off in the long run, because the salespeople didn’t know (or care) if customers stuck around. Overselling a product or feature, for instance, can benefit someone who gets paid only off of the conversion, but the business takes the hit when the customer ends up disappointed in the experience.

So, as I told the audience member: there’s no way to know if those tactics would work, because your goals don’t align. He needs to look for tips from people who are also incentivized to keep customers happy over the course of renewal cycles.

Trusting the wrong person’s tactics is an epidemic in content. Have you seen how many blogs have co-opted the clickbait/Upworthy style of headline? If your sole metric of success is ad revenue from single page views, that may very well be a useful tactic for you. If you’re hoping to build readership, you probably ought to dial back the eye-roll-inducing titles and work on having great content that’s worth reading.

It all comes down to ensuring that the metrics you’re trying to affect accurately reflect your business goals.

I’m wincing as I conclude with: maybe you should learn from BuzzFeed and take a fresh look at how you define success.

November 14, 2014

We’ve got so much data to look at these days—so many papers and reports. Endless analytics data.

Infinite ways to screw up what any of it actually means.

The Leap

A pretty entertaining article on PBR has been making the rounds this week. In it, the author opens with a personal anecdote about her apparently uncool husband buying PBR, which has a reputation as a “hipster” (don’t get me started on this word) beer. Her husband’s purchase, apparently, conclusively inaugurates PBR’s entrance into the mainstream and thus spells the end of the brand.

The unfounded leap comes after she shares a study showing a “direct correlation between a brand’s perceived autonomy and a consumer’s level of counterculturalism.” She mentions that PBR markets in a very non-traditional manner (true, most certainly here in Atlanta) and then ties the two together:

Of course, the fact that hipsters spent the past decade drinking “sub-premium” beer isn’t their fault. It’s the fault of PBR’s savvy marketers—and also of human nature.

She goes on to conclude, essentially, that non-traditional marketing created a connection with people who identify with counterculturalism, causing them to integrate the brand into their group identity (the human nature part she mentioned). It’s even supported with heat maps of beer availability in trendy neighborhoods.

Let’s pause. This is a great article because it’s a compelling story. It’s not in a scientific journal; it’s in an outdoor magazine’s blog.

It’s insightful, but it’s biased. I want to show you how, because we’re incredibly prone to the same mistakes in our own marketing analysis. We’re suckers for a good story.

Detection & Confirmation Bias

Let’s go back to those heat maps presented as evidence that PBR is popular with a vague group of individuals because of non-traditional marketing.

One of the heat maps is shown further down the page, accompanied by an explanation for why the attempt to link PBR to hipsters falls short:

While the two most obvious pictures of hipsterdom seem to check out with the PBR hypothesis, going to other cities brings us other results.

Detection bias shows up when hipsters from New York and San Francisco are followed more closely (as in many articles reporting on so-called hipsters) than similarly described people elsewhere. Studying only the slice of the culture where what you’re looking for is most likely to occur will yield results closer to what you’re expecting, which has already begun the transition into confirmation bias: you’ll tend to favor information that supports your existing conclusion.
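Here’s how fast that skews the numbers, as a toy Python simulation (every figure below is invented): survey only the neighborhoods where you expect the beer to be popular, and your estimate roughly quadruples.

```python
# Toy simulation of detection bias (all numbers invented): estimating a
# beer's popularity by sampling only where you already expect to find it.
import random

random.seed(42)

# True share of drinkers who prefer the beer, by neighborhood type.
preference = {"trendy": 0.40, "suburban": 0.10, "rural": 0.05}
# Share of the overall population living in each neighborhood type.
population_share = {"trendy": 0.10, "suburban": 0.60, "rural": 0.30}


def survey(neighborhood: str, n: int = 10_000) -> float:
    """Fraction of n sampled drinkers who prefer the beer."""
    hits = sum(random.random() < preference[neighborhood] for _ in range(n))
    return hits / n


# Biased study: only survey the trendy neighborhoods.
biased = survey("trendy")

# Representative study: weight each neighborhood by its population share.
representative = sum(population_share[k] * survey(k) for k in preference)

print(f"Biased estimate:         {biased:.1%}")          # ~40%
print(f"Representative estimate: {representative:.1%}")  # ~12%
```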

Even that very article (the one with the heat maps) goes on to explore the idea that the link is not so much with a particular culture as with a particular economic status:

The places we do see PBR light up are around universities. Cambridge and Boston have tons of schools, and larger ones like Harvard and MIT bring droves of college kids that can’t spend a boatload on drinks. That raises the next natural question: if you keep a tight budget, are you wise to flock to PBR to keep costs low?

Now we’re getting somewhere. PBR is cheap. Miller High Life and Tecate are also cheap beers; my thoughts jumped to them immediately while reading the PBR article, because I see them in the same culture just as often, if not more.

The data is already skewed because it’s built on the wrong variables. Once the running theory is that PBR is prevalent where cost is a concern, comparing it against other cheap beers would give far more compelling data about its impact on cultural identification.

This happens frequently in marketing efforts; the clearest example is advertising. It’s notoriously difficult to measure returns on advertising, and our attempts to attribute causality result in a drastic overestimation of effectiveness.

How do we deal with our very real inability to draw meaningful conclusions?

Activity Bias

Awareness of certain types of bias helps cancel them out. There are more mathematically rigorous means of doing so, but we’d do well just to avoid being wholly duped and to ask the right questions.

Let’s go back to cheap beer. A more in-depth analysis would have looked at the availability and purchase of PBR and other similarly priced beers in areas targeted by PBR’s marketing efforts. People are already buying cheap beer; the question is whether they’re choosing PBR or not.

That’s an admittedly abstract but useful example of something called activity bias. It’s a problem because we tend to attribute activity to a marketing effort without comparing it to the activity of people who were never exposed to that effort.

You can understand this easily with a graph from Nielsen Norman Group’s exploration:

[Chart: users visiting a site before and after watching a promotional video for that site, alongside a control group that watched an unrelated video. Replotted from data published in Lewis et al. (2011).]

If it weren’t for that pesky orange line, this graph could be used to quickly prove that a promotional video resulted in increased traffic. A keen eye would see that the graph actually started trending up before the video even began, but comparing it to traffic from those who didn’t see the video is the final nail in the coffin.
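Here’s the same trap in miniature, as a toy Python simulation (all numbers invented). The promo video below has zero causal effect on visits, yet the self-selected watchers look about twice as engaged as everyone else:

```python
# Toy simulation of activity bias (all numbers invented): highly active
# users self-select into watching the promo, so their later visits say
# nothing about whether the promo worked.
import random

random.seed(7)


def simulate_user() -> tuple[bool, bool]:
    activity = random.random()            # baseline engagement, 0..1
    watched = random.random() < activity  # active users watch the promo more
    visited = random.random() < activity  # visits track activity, not the promo
    return watched, visited


users = [simulate_user() for _ in range(100_000)]
watched = [visited for w, visited in users if w]
control = [visited for w, visited in users if not w]

print(f"Visit rate, watched promo: {sum(watched) / len(watched):.1%}")  # ~67%
print(f"Visit rate, control:       {sum(control) / len(control):.1%}")  # ~33%
# The promo had no effect at all; without the control group, you'd happily
# credit it for doubling traffic.
```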

There’s plenty of discussion about why activity bias exists, but the why matters less than knowing it’s there. It starts small.

Why look for more data when the data you have makes it look like you made the right decision? Why rock the boat when the boss is happy? Why turn down a larger budget, now that you’re showing you deserved it?

You may have personal reasons not to counteract this bias, but know that it leads to bad analysis, which can, in turn, cause companies to spend massive amounts of money inefficiently.

You need to remember that attribution models are imperfect and may need to change frequently. eBay experimented with branded keyword advertisements and found that the true benefit of search ads comes from exposure to customers who don’t know or remember your brand. Jakob Nielsen adds:

Activity bias comes back to haunt marketing managers who run simplistic analyses of “attributed sales” to advertising, assuming that sales are caused by whatever happened to be the user’s last click. Many users who both click ads and make purchases would have done the latter even if they hadn’t seen an ad. A controlled experiment is the only way to discover an ad’s true impact.

Want to actually understand marketing analytics? You’ll need to get your proverbial lab coat on and run true experiments.
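As a sketch of what the lab coat actually entails (Python, with hypothetical numbers): randomly split your audience, show the ad only to the treatment group, and judge the lift against the control with a two-proportion z-test.

```python
# A minimal controlled ad experiment (hypothetical numbers): conversion
# lift judged against a randomized control group with a two-proportion z-test.
from math import erf, sqrt

treatment_n, treatment_conv = 20_000, 500  # randomly assigned, saw the ad
control_n, control_conv = 20_000, 400      # randomly assigned, did not

p_t = treatment_conv / treatment_n
p_c = control_conv / control_n
p_pool = (treatment_conv + control_conv) / (treatment_n + control_n)

se = sqrt(p_pool * (1 - p_pool) * (1 / treatment_n + 1 / control_n))
z = (p_t - p_c) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"Treatment conversion: {p_t:.2%}")  # 2.50%
print(f"Control conversion:   {p_c:.2%}")  # 2.00%
print(f"Lift: {(p_t - p_c) / p_c:.0%}, z = {z:.2f}, p = {p_value:.4f}")
# Naive "attributed sales" would credit the ad for all 500 conversions;
# the control group suggests it actually caused roughly 100 of them.
```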

If you don’t, your data analysis will ironically cause you to put money in the wrong places. And guess who loves irony?

August 8, 2014