Why you may be missing important insights if you only look at Percent Top 2 Boxes.

August 23, 2017

Overview

Anyone who has looked to this blog for insights into visualizing survey data knows that my “go to” visualization for Likert scale sentiment data is a divergent stacked bar chart (Figure 1).

Figure 1 -- Divergent stacked bar chart for a collection of 5-point Likert scale questions

You might prefer grouping all the positives and negatives together, showing only a three-point scale.  Or perhaps you question having the neutrals “straddle the fence,” as it were, with half being positive and half being negative.  These are fair points that I’m happy to debate at another time, but right now I want to focus on what happens when we need to compare survey results between two periods.

Showing responses for more than one period

As much as I love the divergent stacked bar chart, it can become a little difficult to parse when you show more than one period for more than one question. Consider the chart below where we compare results for 2017 vs. 2016 (Figure 2).

Figure 2 -- Showing responses for two different periods

As comfortable as I am with seeing how sentiment skews positive or negative with a divergent stacked bar chart, I’m at a loss to compare the results across two different years. The only thing that really stands out is that there appears to be a pretty big difference between 2017 and 2016 for “Really important issue 7” at the bottom of the chart.

The allure of Percent Top 2 Boxes

It’s times like these when focusing on the percentage of respondents that selected Strongly agree or Generally agree (Percent Top 2 Boxes) is very tempting.  Consider the connected dot plot in Figure 3.
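If you’re ever computing Percent Top 2 Boxes outside a BI tool, the arithmetic is simple. Here’s a minimal Python sketch with made-up responses (a 5-point scale coded 1 to 5, so the top 2 boxes are 4 and 5):

```python
# Percent Top 2 Boxes: the share of respondents choosing the top two
# options on a 5-point Likert scale (here coded 1-5, so 4 or 5).
def pct_top2(responses, top_threshold=4):
    """Fraction of responses at or above the threshold."""
    return sum(1 for r in responses if r >= top_threshold) / len(responses)

# Hypothetical responses to one question in 2016 and 2017.
responses_2016 = [5, 4, 3, 3, 2, 4, 1, 5, 3, 4]
responses_2017 = [5, 5, 4, 2, 1, 4, 5, 4, 1, 5]

print(f"2016: {pct_top2(responses_2016):.0%}")  # 50%
print(f"2017: {pct_top2(responses_2017):.0%}")  # 70%
```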

Figure 3 -- Connected dot plot showing difference between Percent Top 2 Boxes in 2017 and 2016.

Hey, that’s clear and easy to read. Indeed, this is one of my recommended approaches for comparing Importance vs. Satisfaction and it works great for comparing results across two time periods.

So, we’re all done, right?

Not so fast. While this approach will work in many cases, you should never stop exploring as there may be something important that remains hidden when you only show Percent Top 2 Boxes.

It’s not the economy, it’s the neutrals (stupid)

I was recently working with a client who had surveyed a large group about several contentious topics. The client believed that, much like the population of the United States, the surveyed population had become more polarized over the past year, at least with respect to these survey topics.

In reviewing the results for three questions, if we just focus on the positives (Percentage Top 2 Boxes), things look like they have improved (Figure 4).

Figure 4 -- Connected dot plot showing change in positives (Percentage Top 2 Boxes) between 2017 and 2016.

See? We have more positives (green) now than we did a year ago (gray).

This may be true, but it only tells part of the story.

Consider the divergent stacked bar chart shown in Figure 5.

Figure 5 -- 5-point divergent stacked bar chart comparing results from 2017 and 2016

Woah… there’s something very interesting going on here, but it’s very hard to see.  Maybe if we combine all the positives and negatives the “ah-ha” will be easier to decipher (Figure 6).

Figure 6 -- 3-point divergent stacked bar chart comparing results from 2017 and 2016. There are big differences between the two time periods, but they are hard to see.

Well, that’s a little better, but the story — and it’s a really big story — is still hidden. Let’s see what happens if we abandon both the connected dot plot and divergent stacked bar chart and instead try a slopegraph (actually, a distributed slopegraph, Figure 7).

Figure 7 -- Distributed slopegraph showing change in positives, neutrals, and negatives.

Now we can see it!  Just look at the gray lines showing the dramatic change in neutrals.  My client’s hunch was correct: the population has become much more polarized, as the percentage of neutrals has plummeted while the percentage of people expressing both positive and negative sentiment has increased. You cannot see this at all with the connected dot plot, and it’s hard to glean from the divergent stacked bar chart.
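The polarization story is easy to verify numerically: collapse each response into negative / neutral / positive and compare the shares year over year. A quick Python sketch with made-up numbers that mimic the pattern in Figure 7:

```python
from collections import Counter

def sentiment_shares(responses):
    """Collapse 5-point responses (coded 1-5) into negative / neutral /
    positive shares. 1-2 = negative, 3 = neutral, 4-5 = positive."""
    buckets = Counter()
    for r in responses:
        if r <= 2:
            buckets["negative"] += 1
        elif r == 3:
            buckets["neutral"] += 1
        else:
            buckets["positive"] += 1
    n = len(responses)
    return {k: buckets[k] / n for k in ("negative", "neutral", "positive")}

# Hypothetical data: the neutrals collapse while both extremes grow.
shares_2016 = sentiment_shares([3] * 50 + [4] * 30 + [2] * 20)
shares_2017 = sentiment_shares([3] * 20 + [4] * 45 + [2] * 35)
print(shares_2016)  # {'negative': 0.2, 'neutral': 0.5, 'positive': 0.3}
print(shares_2017)  # {'negative': 0.35, 'neutral': 0.2, 'positive': 0.45}
```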

There is no one best chart for every situation

I had the good fortune to attend one of Cole Nussbaumer Knaflic’s Storytelling with Data workshops. She uses a wonderful metaphor in describing how much work it can take to present just one really good finding. I paraphrase:

“You have to shuck a lot of oysters to find a single pearl. In your presentations, don’t show all the shells you shucked; just show the pearl.”

For this last example, if I only had 30 seconds of the chief stakeholder’s time I would just show the distributed slopegraph as it is the “pearl.”  It clearly and concisely imparts the biggest finding for the data set: the population has become considerably more polarized for all three issues.

But…

What happens if the chief stakeholder wants to know more? I would be armed with an interactive dashboard to answer questions like these:

“The people that disagree… how many of them strongly disagree?”

“The people that agree… how many of them strongly agree?”

“Are these findings consistent across the entire organization, or only in some areas?”

Conclusion

So, when showing changes in sentiment over time, which chart is best? The connected dot plot? The divergent stacked bar chart? The distributed slopegraph?

To quote my fellow author of the Big Book of Dashboards, Andy Cotgreave, “it depends.”

You should be prepared to apply all three approaches and choose the one that imparts the greatest understanding with the least amount of effort.

Note

I’ve had a number of debates with people about how I prefer to handle neutrals (half on the negative side and half on the positive side). If you find that troubling you can place the neutrals to one side, as shown in Figure 8.

Figure 8 -- Neutrals placed to one side providing a common baseline for comparison.
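If you ever build a divergent stacked bar outside of Tableau, the half-and-half neutral convention just amounts to shifting each bar so that half the neutral share sits on either side of zero. A hypothetical sketch of that arithmetic in Python:

```python
def divergent_offsets(negative, neutral, positive):
    """Return (left_extent, right_extent) of a divergent stacked bar
    with the neutral segment straddling zero (half on each side)."""
    left = -(negative + neutral / 2)
    right = positive + neutral / 2
    return round(left, 4), round(right, 4)  # round to tame float noise

# E.g. 20% negative, 30% neutral, 50% positive:
print(divergent_offsets(0.20, 0.30, 0.50))  # (-0.35, 0.65)
```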

May 10, 2017

Overview

Most organizations want to wildly exceed customer expectations for all facets of all their products and services, but if your organization is like most, you’re not going to be able to do this. Therefore, how should you allocate money and resources?

First, make sure you are not putting time and attention into things that aren’t important to your customers and make sure you satisfy customers with the things that are important.

One way to do this is to create a survey that contains two parallel sets of questions: one asking customers how important certain features / services are, and the other asking how satisfied they are with those same features / services.  A snippet of what this might look like to a survey taker is shown in Figure 1.

Figure 1 -- How the importance vs. satisfaction questions might appear to the person taking the survey.

How to Visualize the Results

I’ve come up with a half dozen ways to show the results and will share three approaches in this blog post.  All three approaches use the concept of “Top 2 Boxes” where we compare the percentage of people who indicated Important or Very Important (the top two possible choices out of five for importance) and Satisfied or Very Satisfied (again, the top two choices for Satisfaction).
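All three approaches rest on the same two numbers per feature: the top-2-box percentage for Importance, the top-2-box percentage for Satisfaction, and the gap between them. A Python sketch with made-up paired responses for one feature:

```python
def pct_top2(responses, threshold=4):
    """Share of 5-point responses (coded 1-5) in the top two boxes."""
    return sum(1 for r in responses if r >= threshold) / len(responses)

# Hypothetical paired responses for one feature.
importance   = [5, 5, 4, 4, 3, 5, 4, 2, 5, 4]
satisfaction = [3, 4, 2, 3, 4, 3, 2, 3, 4, 3]

gap = pct_top2(satisfaction) - pct_top2(importance)
print(f"Importance: {pct_top2(importance):.0%}")     # 80%
print(f"Satisfaction: {pct_top2(satisfaction):.0%}") # 30%
print(f"Gap: {gap:+.0%}")                            # -50%
```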

Bar-In-Bar Chart

Figure 2 shows a bar-in-bar chart, sorted by the items that are most important.

Figure 2 -- Bar-in-bar chart

This works fine, as would having a bar and a vertical reference line.

It’s easy to see that we are disappointing our customers in everything except the least important category, and that the gap between importance and satisfaction is particularly pronounced in Ability to Customize UI (we’re not doing so well in Response Time, 24-7 Support, and Ease of Use, either).

Scatterplot with 45-degree line

Figure 3 shows a scatterplot that compares the percent top 2 boxes for Importance plotted against the percent top 2 boxes for Satisfaction where each mark is a different attribute in our study.

Figure 3 -- Scatterplot with 45-degree reference line

The goal is to be as close to the 45-degree line as possible in that you want to match satisfaction with importance. That is, you don’t want to underserve customers (have marks below the line), but you probably don’t want to overserve, either, as marks above the line suggest you may be putting too many resources into things that are not that important to your customers.

As with the previous example it’s easy to see the one place where we are exceeding expectations and the three places where we’re quite a bit behind.
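In code, the over / underserving test is just the sign of the satisfaction-minus-importance gap. A sketch with hypothetical feature names and numbers:

```python
def classify(importance, satisfaction):
    """Position relative to the 45-degree line:
    a negative gap means underserved, a positive gap means overserved."""
    gap = satisfaction - importance
    if gap < 0:
        return gap, "underserved"
    if gap > 0:
        return gap, "overserved"
    return gap, "matched"

# Percent-top-2-box scores per feature (hypothetical names and numbers).
features = {
    "Response Time":   (0.82, 0.55),
    "Ease of Use":     (0.78, 0.60),
    "Training Videos": (0.35, 0.50),
}
for name, (imp, sat) in features.items():
    gap, status = classify(imp, sat)
    print(f"{name}: {gap:+.0%} ({status})")
```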

Dot Plot with Line

Of the half dozen or so approaches the one I like most is the connected dot plot, shown in Figure 4.

Figure 4 -- Connected dot plot. This is the viz I like the most.

(I placed “I like most” in italics because all the visualizations I’ve shown “work” and one of them might resonate more with your audience than this one.  Just because I like it doesn’t mean it will be the best for your organization so get feedback before deploying.)

In the connected dot plot the dots show the top 2 boxes for importance compared to the top 2 boxes for satisfaction.  The line between them underscores the gap.

I like this viz because it is sortable and easy to see where the gaps are most pronounced.

But what about a Divergent Stacked Bar Chart?

Yes, this is my “go to” viz for Likert-scale questions, and I do in fact incorporate such a view in the drill-down dashboard found at the end of this blog post. I experimented with the view here, too, but found that while it worked for comparing one feature at a time, it was difficult to understand when comparing all 10 features (see Figure 5).

Figure 5 -- Divergent stacked bar overload (too much of a good thing).

How to Build This — Make Sure the Data is Set Up Correctly

As with everything survey related, it’s critical that the data be set up properly. In this case for each Question ID we have something that maps that ID to a human readable question / feature and groups related questions together, as shown in Figure 6.

Figure 6 -- Mapping the question IDs to human readable form and grouping related questions

Having the data set up “just so” allows us to quickly build a useful, albeit hard to parse, comparison of Importance vs. Satisfaction, as shown in Figure 7.

Figure 7 -- Quick and dirty comparison of importance vs. satisfaction.

Here we are just showing the questions that pertain to Importance and Satisfaction (1). Note that the measure [Percentage Top 2 Boxes] on Columns (2) is defined as follows.

Figure 8 -- Calculated field for determining the percentage of people that selected the top 2 boxes.

Why >=3?  It turns out that the Likert scale for this data went from 0 to 4, so here we just want to add up everyone who selected a 3 or a 4.
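In Python terms, the calculated field boils down to counting the responses at or above 3 and dividing by the number of records (made-up values on the 0-to-4 scale):

```python
# Equivalent of the Tableau calc: count responses >= 3 on the 0-4
# scale and divide by the total number of records. (Made-up data.)
values = [4, 3, 2, 0, 4, 3, 1, 4]

pct_top2 = sum(1 for v in values if v >= 3) / len(values)
print(f"{pct_top2:.1%}")  # 62.5%
```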

Not Quite Ready to Rock and Roll

This calculated field will work for many of the visualizations we might want to create, but it won’t work for the scatterplot and it will give us some headaches when we attempt to add some discrete measures to the header that surrounds our chart (the % Diff text that appears to the left of the dot plot in Figure 4.) So, instead of having a single calculation I created two separate calculations to compute % top 2 boxes Importance and % top 2 boxes Satisfaction. The calculation for Importance is shown in Figure 9.

Figure 9 -- Calculated field for determining the percentage of folks that selected the top two boxes for Importance.

Notice that we have all the rows associated with both the Importance questions and Satisfaction “in play”, as it were, but we’re only tabulating results for the Importance questions so we’re dividing by half of the total number of records.

We’ll need to create a similar calculated field for the Satisfaction questions.
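To make the “divide by half the records” logic concrete, here’s a Python sketch with hypothetical rows tagged by question type (equal numbers of Importance and Satisfaction rows, as in this data set):

```python
# Each row: (question_type, value on the 0-4 scale). Made-up data with
# an equal number of Importance and Satisfaction rows, as in the post.
rows = [
    ("Importance", 4), ("Importance", 3), ("Importance", 2), ("Importance", 4),
    ("Satisfaction", 1), ("Satisfaction", 3), ("Satisfaction", 2), ("Satisfaction", 0),
]

def pct_top2_for(rows, qtype):
    """Top-2-box share for one question type, dividing by half the
    total records, mirroring the calc shown in Figure 9."""
    hits = sum(1 for t, v in rows if t == qtype and v >= 3)
    return hits / (len(rows) / 2)

print(pct_top2_for(rows, "Importance"))   # 0.75
print(pct_top2_for(rows, "Satisfaction")) # 0.25
```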

Ready to Rock and Roll

Understanding the Dot Plot

Figure 10 shows what drives the Dot Plot (we’ll add the connecting line in a moment.)

Figure 10 -- Dissecting the Dot Plot.

Here we see that we have a Shape chart (1) that will display two different Measure Values (2) and that Measure Names (3) is controlling Shape and Color.

Creating the Connecting Line Chart

Figure 11 shows how the Line chart that connects the shapes is built.

Figure 11 -- Dissecting the Line chart.

Notice that Measure Values is on Rows a second time (1), but for this second instance the mark type is a Line (2) and the end points are connected using Measure Names on the Path (3).  Also notice that there is no longer anything controlling the Color, as we want a line that is only one color.

Combining the Two Charts

The only thing we need to do now is combine the two charts into one by making a dual axis chart, synchronizing the secondary axis, and hiding the secondary header (Figure 12).

Figure 12 -- The completed connected dot plot.

What to Look for in the Dashboard

Any chart that answers a question usually fosters more questions. Consider the really big gap in Ability to Customize UI. Did all respondents indicate this, or only some?

And if one group was considerably more pronounced than others, what were the actual responses across the board (vs. just looking at the percent top 2 boxes)?

Figure 13 -- Getting the details on how one group responded

The dashboard embedded below shows how you can answer these questions.

Got another approach that you think works better?  Let me know.