
August 3, 2017

In my last blog post I pointed out that I wish I had put BANs (big-ass numbers) in the Churn dashboard featured in Chapter 24 of the book (see http://www.datarevelations.com/iterate.html).

I had a similar experience this week when I revisited the Net Promoter Score dashboard from Chapter 17.  I’ve been reading Don Norman’s book The Design of Everyday Things and have been thinking about how to apply many of its principles to dashboard design.

One thing you can do to help users decode your work is to ditch the legend and add a color key to your dashboard title.

Here’s the Net Promoter Score dashboard as we present it in the book.  Notice the color legend towards the bottom right corner.

Figure 1 — Net Promoter Score dashboard from The Big Book of Dashboards.

Why did I place the legend out of the natural “flow” of how people would look at the dashboard? Why not just make the color coding part of the dashboard title, as shown below?

Figure 2 — Making the color legend part of the title.

I’m not losing sleep over this, as it’s a dashboard people will probably be looking at on a regular basis; that is, once they know what “blue” means they won’t need to look at the legend.

But…

Every user will have his or her “first time” with a dashboard, so I recommend making the legend part of the “flow” wherever possible. For example, instead of the legend being an appendage off to the side of the dashboard…

Figure 3 — Color legend as an appendage.

Consider making the color legend part of the title, as shown here.

Figure 4 — Color coding integrated into the title.

 

January 11, 2016

Overview

I spend a lot of time with survey data, and much of it revolves around gauging people’s sentiments and tendencies using either a Likert scale or a Net Promoter Score (NPS)-style question.

Examples

Here’s an example of gauging sentiment using a 5-point Likert scale.

Indicate how satisfied you are with the following:

[Survey grid: rate your satisfaction with each item on a 5-point scale]

Here’s an example of measuring tendencies, using a 4-point Likert scale.

How often do you use the following learning modalities?

[Survey grid: rate how often you use each learning modality on a 4-point scale]

So, what’s a good way to visualize responses to these types of questions?

Over the past ten years I’ve spent thousands of hours working on the best ways to show how opinions and tendencies skew one way or another. I have found that in most cases a divergent stacked bar chart helps me (and more importantly, my clients) best see what’s going on with the survey responses.

In this blog post we’ll:

  • See an example of a divergent stacked bar chart (also called a staggered stacked bar chart)
  • Work through a data visualization improvement process
  • Show how to visualize different scales (e.g., NPS, Top 3/Bottom 3, 5-point Likert, etc.)
  • Show sentiment and tendencies over time
  • Present a dashboard that will allow you to experiment with different visualization approaches

Note: for step-by-step instructions on how to build a Likert-scale divergent stacked bar chart in Tableau, click here.

Divergent Stacked Bar vs. 100% Stacked Bar

Readers of my newsletter and folks visiting the web site may have seen my redesign of a New York Times infographic that showed the tendencies of politicians to lie or tell the truth.  Here’s the 100% Stacked Bar chart that appeared in the New York Times.

Figure 1 — 100% stacked bar chart.

Here’s the redesign using a divergent stacked bar chart.

Figure 2 — Divergent stacked bar chart.

With both the 100% stacked bar chart and the divergent stacked bar chart, the overall length of the bars is the same, but with the divergent approach the bars are shifted left or right to show which way a candidate leans. I, and others I’ve polled, find that shifting the bars makes the chart easier to understand.
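If you’d like to prototype this layout outside of Tableau, here’s a minimal sketch in Python and matplotlib. It is not the Tableau build from the step-by-step post linked above, and the items, percentages, and colors are placeholders I made up; the only real trick is that each bar starts at minus the sum of its negative segments, so those segments extend to the left of zero.

```python
# Minimal divergent stacked bar layout (illustrative data, not the Tableau method).
import matplotlib.pyplot as plt

colors = ["#ca0020", "#f4a582", "#92c5de", "#0571b0"]   # most negative -> most positive
pcts = {                                                 # % of respondents per item
    "Item A": [10, 20, 40, 30],
    "Item B": [ 5, 15, 35, 45],
    "Item C": [30, 30, 25, 15],
}

fig, ax = plt.subplots()
for y, (item, values) in enumerate(pcts.items()):
    left = -(values[0] + values[1])          # start so the two negative segments sit left of zero
    for value, color in zip(values, colors):
        ax.barh(y, value, left=left, color=color)
        left += value

ax.axvline(0, color="black", linewidth=0.8)  # the line the bars diverge around
ax.set_yticks(range(len(pcts)))
ax.set_yticklabels(list(pcts.keys()))
ax.set_xlabel("% of respondents (negative responses extend left of zero)")
plt.show()
```

With an odd-point scale you would typically split the neutral category across the zero line by starting each bar at the negative of (its negative total plus half of its neutral percentage); that’s one common convention, and the dashboards in this post may treat neutrals differently.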

How We Got Here — Likert Scale Improvement Process

Consider the table below that shows the results from a fictitious poll on the use of various learning modalities.

Figure 3 — Survey results in a table.

I can’t glean anything meaningful from this.

What about a bar chart?

Figure 4 — Likert scale questions using a bar chart. Yikes.

Wow, that’s really bad.

What about a 100% stacked bar chart?

Figure 5 — 100% stacked bar chart using default colors.

Okay, that’s better, but it’s still pretty bad, as Tableau’s default colors do nothing to show that adjacent responses are related. That is, “Often” and “Sometimes” should have similar colors, as should “Rarely” and “Never.”

So, let’s try using better colors…

(…and don’t even think about using red and green.)

Figure 6 — 100% stacked bar chart using a more appropriate color scheme.

This is certainly an improvement, but the modalities are listed alphabetically and not by how often they’re used. Let’s see what happens when we sort the bars.

Figure 7 — Sorted 100% stacked bar chart with good colors.

It’s taken us several tries, but it’s now easier to see which modalities are more popular.

But we can do better.

Here’s the same data rendered as a divergent stacked bar chart.

Figure 8 — Sorted divergent stacked bar chart with good colors.

Of course, we can also take a coarser view and just compare Sometimes/Often with Rarely/Never, as shown here.

Figure 9 – Divergent stacked bar chart with only two levels of sentiment.

I find that the divergent approach “speaks” to me and it resonates with my colleagues and clients.

Experiments using Different Scales

A while back Helen Lindsey was kind enough to send me some data that contained responses to some Net Promoter Score questions.  Specifically, folks were asked to rate companies/products on a 0 to 10 or 1 to 10 scale.

Figure 10 — The classic Net Promoter Score (NPS) question

We compute NPS by taking the percentage of folks who are promoters (i.e., people who responded with a 9 or a 10), subtracting the percentage of folks who are detractors (i.e., people who responded with a 0 through 6), and multiplying by 100.
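Here’s that arithmetic as a few lines of Python, just to make it concrete (the list of scores is made up for illustration):

```python
# NPS from raw 0-10 responses: % promoters (9-10) minus % detractors (0-6), times 100.
def nps(scores):
    promoters  = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 7, 6, 5, 3, 10]))   # 4 promoters, 3 detractors out of 10 -> 10.0
```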

But sometimes my clients ask questions on a 10- or 11-point scale and instead want to compute the percentage of folks who chose one of the top three boxes minus the percentage who chose one of the bottom three boxes.

I realized that the Lindsey data set could provide a type of “sandbox” where we could experiment with different sentiment scales including NPS, Top 3 minus Bottom 3, 5-point Likert, 3-point Likert, and 2-point Likert.
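As a tiny illustration of why such a sandbox is handy, the same response can land in a different bucket under each scale. The cutoffs below are my assumptions for a 1-to-10 scale (top three boxes = 8 to 10, bottom three = 1 to 3), not something dictated by the data:

```python
# Bucket a single 1-10 response under two of the scales discussed in this post
# (the Top 3 / Bottom 3 cutoffs are assumptions for a 1-to-10 scale).
def bucket(score, scale="NPS"):
    if scale == "NPS":
        return "Promoter" if score >= 9 else "Detractor" if score <= 6 else "Passive"
    if scale == "Top3/Bottom3":
        return "Positive" if score >= 8 else "Negative" if score <= 3 else "Neutral"
    raise ValueError(f"unknown scale: {scale}")

print(bucket(8, "NPS"), bucket(8, "Top3/Bottom3"))   # -> Passive Positive
```

An 8 counts as a Passive under NPS but as a Positive under Top 3/Bottom 3, which is why the rankings can shift as you change scales.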

Let’s look at the results of some of these experiments.

NPS

Here are two ways we can visualize NPS data.  The first shows the percentages of people that fall into the three categories.

Figure 11 — NPS showing percentages

Here’s the same view, but with the NPS score superimposed over the divergent stacked bars.

Figure 12 — NPS with score superimposed

NPS over Time

It turns out that divergent stacked bars are great at showing NPS trends over time.  Here’s a view using percentages.

Figure 13 — Divergent stacked bar showing NPS over time with percentages

Here’s the same view but with the score superimposed.

Figure 14 — Divergent stacked bar showing NPS over time with scores
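If you want to reproduce the numbers behind a trend view like this outside of Tableau, a few lines of pandas will do. The column names, dates, and scores below are invented for illustration; they are not fields from the actual data set.

```python
# NPS per quarter from raw dated responses (illustrative data).
import pandas as pd

df = pd.DataFrame({
    "date":  pd.to_datetime(["2015-01-15", "2015-02-03", "2015-03-10", "2015-04-20", "2015-05-11"]),
    "score": [9, 6, 10, 7, 3],
})

def nps(scores):
    return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

print(df.groupby(df["date"].dt.to_period("Q"))["score"].apply(nps))
# 2015Q1: two promoters, one detractor of three -> 33.3;  2015Q2: one detractor of two -> -50.0
```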

Note: for some other interesting treatments of sentiment over time, see Joe Mako’s visualization on banker honesty.

Net = Top 3 minus Bottom 3

Let’s take the same data but divide it into the following buckets:

  • Positive = Top 3 Boxes
  • Neutral = Middle 4 Boxes
  • Negative = Bottom 3 Boxes

Here are the associated visualizations.

Figure 15 — Top 3/Bottom 3 with percentages

Figure 16 — Top 3/Bottom 3 with scores

Five-, Three-, and Two-Point Likert Scale Renderings

Let’s suppose that instead of asking a question on a 1-through-10 scale we asked folks to select one of the following five responses:

  • Strongly disagree
  • Disagree
  • Neutral
  • Agree
  • Strongly agree

Here’s the same NPS data but rendered using a five-point Likert scale.

Figure 17 — Divergent stacked bar chart showing all responses

And here’s the same data, but divided into positive, neutral, and negative sentiments (3-point Likert).

Figure 18 — Divergent stacked bar showing positive, neutral, and negative

Finally, here’s the same data, but only showing positive and negative sentiments (2-point Likert).

Figure 19 — Divergent stacked bar showing just positive and negative

Try it yourself

Below you will find a dashboard that allows you to explore different ways of grouping the 1-to-10 scale.

I strongly recommend you do NOT give your audience all these scaling options; they are here for you to experiment and see how the visualizations and rankings change based on which scale you use. The only option I would present to your audience is the ability to toggle back and forth between percentages and scores.

May 11, 2015

Many thanks to Susan Ferrari for exposing me to the concept of Net Promoter Score, Susan Baier for encouraging me to blog about it, and Helen Lindsey for providing anonymized NPS data.

Overview

My wife and I recently went out to a restaurant to celebrate our anniversary.  Accompanying the check was a survey card with three questions, one of which looked like this.

Figure 1 — The classic Net Promoter Score question

We both agreed that the restaurant was very good, if not excellent, and that we would indeed recommend it to friends.  My wife suggested we circle the “8”.

I told her that if we were enthusiastic about recommending the restaurant we should give it a “9,” as a 7 or 8 would be tabulated as a “neutral” or “passive” response.

She looked at me quizzically and asked why an “8” would be considered neutral.

I then explained how the Net Promoter Score works.

Understanding the Score

Respondents are presented with the question “Using a scale from 0 to 10, how likely is it that you would recommend this product/service to a friend or colleague?”

  • Anyone that responds with a 0 through 6 is considered a Detractor.
  • Anyone that responds with a 7 or 8 is considered a Passive (or Neutral).
  • Anyone that responds with a 9 or 10 is considered a Promoter.

The Net Promoter Score (NPS) is computed by taking the percentage of people who are Promoters, subtracting the percentage of people who are Detractors, and multiplying that number by 100.

Figure 2 — How to compute NPS, courtesy B2B International.

If you are like me (and my wife) you’re probably thinking that a “6” is a pretty good score and that it shouldn’t be bunched among the detractors.

I’m not going to get into a debate about NPS methodology and its usefulness, but I do want to show you some good ways to visualize NPS data.

The Problem with the Traditional Presentation

Consider this snippet of NPS survey data with responses about different companies from people in different roles.

Figure 3 — Raw NPS data about different companies from people with different occupations.

If we just focus on the NPS and not the components that comprise it, we can produce an easy-to-sort bar chart like the one shown here.

Figure 4 — Traditional way to show NPS

Yes, it’s easy to see that Company D has a much higher NPS than Company H, but by not showing the individual components, and in particular the Neutrals/Passives, we’re missing an important part of the story: the Neutrals/Passives are right on the cusp of becoming Promoters.

For example, a Net Promoter Score of 40 can come from

  • 70% Promoters and 30% Detractors
  • 45% Promoters, 50% Passives, 5% Detractors

Same score, big difference in makeup.

An Alternative Approach to Displaying NPS Results

Consider the dashboard below, which presents the data as a divergent stacked bar chart.

Figure 5 — NPS dashboard with toggle to show percentages and score.

The chart is easy to sort and you can also see that Company B and Company F have a relatively large group of Neutrals.

That said, being able to see the NPS score is very useful so the dashboard (see working version at the end of this post) has a toggle that switches between percentages and the score, as shown below.

Figure 6 — Divergent stacked bar chart with NPS overlay.

Note that the NPS divergent stacked bar chart is just a variation on a Likert scale divergent stacked bar chart.  You can find an explanation of how to build this type of visualization here.
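Outside of Tableau, the percentages that drive a chart like this, plus the score itself, can be tabulated with a short pandas sketch. The column names and values below are assumptions for data shaped like Figure 3, not the actual survey fields.

```python
# Per-company % of Promoters / Passives / Detractors and the resulting NPS
# (illustrative data; one row per respondent with a company and a 0-10 score).
import pandas as pd

responses = pd.DataFrame({
    "company": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "score":   [ 10,   9,   7,   3,   9,   8,   8,   7 ],
})

summary = (
    responses.assign(
        promoter=responses["score"] >= 9,
        passive=responses["score"].between(7, 8),
        detractor=responses["score"] <= 6,
    )
    .groupby("company")[["promoter", "passive", "detractor"]]
    .mean()          # fraction of respondents in each bucket...
    .mul(100)        # ...expressed as percentages
)
summary["nps"] = summary["promoter"] - summary["detractor"]
print(summary.round(1))
```

In this made-up example, companies A and B end up with the same NPS of 25 even though B has three times as many Passives, which is exactly the nuance the divergent view surfaces.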

What’s Next?

We now have what I think is a more insightful way to visualize Net Promoter Score data.

But clients and readers of my blog have asked me to address some of these questions as well:

  • How do you show the difference in NPS, or just the difference in percentage of promoters, between this quarter and the previous quarter?
  • If there is a difference, is the difference statistically significant?
  • What’s a good way to visualize and analyze NPS over time?

I will be addressing these issues in an upcoming post.  Stay tuned.