May 10, 2017

Overview

Most organizations want to wildly exceed customer expectations for all facets of all their products and services, but if your organization is like most, you’re not going to be able to do this. Therefore, how should you allocate money and resources?

First, make sure you are not putting time and attention into things that aren’t important to your customers and make sure you satisfy customers with the things that are important.

One way to do this is to create a survey that contains two parallel sets of questions: one set asks customers to rate the importance of certain features / services, and the other asks how satisfied they are with those same features / services.  A snippet of what this might look like to a survey taker is shown in Figure 1.

Figure 1 — How the importance vs. satisfaction questions might appear to the person taking the survey.

How to Visualize the Results

I’ve come up with a half dozen ways to show the results and will share three approaches in this blog post.  All three approaches use the concept of “Top 2 Boxes” where we compare the percentage of people who indicated Important or Very Important (the top two possible choices out of five for importance) and Satisfied or Very Satisfied (again, the top two choices for Satisfaction).

Bar-In-Bar Chart

Figure 2 shows a bar-in-bar chart, sorted by the items that are most important.

Figure 2 — Bar-in-bar chart

This works fine, as would having a bar and a vertical reference line.

It’s easy to see that we are disappointing our customers in everything except the least important category, and that the gap between importance and satisfaction is particularly pronounced in Ability to Customize UI (we’re not doing so well in Response Time, 24-7 Support, and Ease of Use, either).

Scatterplot with 45-degree line

Figure 3 shows a scatterplot that compares the percent top 2 boxes for Importance plotted against the percent top 2 boxes for Satisfaction where each mark is a different attribute in our study.

Figure 3 — Scatterplot with 45-degree reference line

The goal is to be as close to the 45-degree line as possible in that you want to match satisfaction with importance. That is, you don’t want to underserve customers (have marks below the line), but you probably don’t want to overserve, either, as marks above the line suggest you may be putting too many resources into things that are not that important to your customers.

As with the previous example it’s easy to see the one place where we are exceeding expectations and the three places where we’re quite a bit behind.

Dot Plot with Line

Of the half dozen or so approaches the one I like most is the connected dot plot, shown in Figure 4.

Figure 4 — Connected dot plot. This is the viz I like the most.

(I placed “I like most” in italics because all the visualizations I’ve shown “work” and one of them might resonate more with your audience than this one.  Just because I like it doesn’t mean it will be the best for your organization so get feedback before deploying.)

In the connected dot plot the dots show the top 2 boxes for importance compared to the top 2 boxes for satisfaction.  The line between them underscores the gap.

I like this viz because it is sortable and easy to see where the gaps are most pronounced.

But what about a Divergent Stacked Bar Chart?

Yes, this is my “go to” viz for Likert-scale questions, and I do in fact incorporate such a view in the drill-down dashboard found at the end of this blog post. I experimented with the view but found that while it worked for comparing one feature at a time, it was difficult to understand when comparing all 10 features (see Figure 5).

Figure 5 — Divergent stacked bar overload (too much of a good thing).

How to Build This — Make Sure the Data is Set Up Correctly

As with everything survey-related, it’s critical that the data be set up properly. In this case, for each Question ID we have something that maps that ID to a human-readable question / feature and groups related questions together, as shown in Figure 6.

Figure 6 — Mapping the question IDs to human readable form and grouping related questions

Having the data set up “just so” allows us to quickly build a useful, albeit hard to parse, comparison of Importance vs. Satisfaction, as shown in Figure 7.

Figure 7 — Quick and dirty comparison of importance vs. satisfaction.

Here we are just showing the questions that pertain to Importance and Satisfaction (1). Note that the measure [Percentage Top 2 Boxes] on Columns (2) is defined as follows.

Figure 8 — Calculated field for determining the percentage of people that selected the top 2 boxes.

Why >=3?  It turns out that the Likert scale for this data went from 0 to 4, so here we just want to add up everyone who selected a 3 or a 4.
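
If you’d rather type the Figure 8 calculation in than squint at the screenshot, it amounts to something like this sketch (I’m assuming the reshaped [Value] and [Number of Records] fields used throughout this blog):

SUM(IF [Value] >= 3 THEN 1 ELSE 0 END) / SUM([Number of Records])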

Not Quite Ready to Rock and Roll

This calculated field will work for many of the visualizations we might want to create, but it won’t work for the scatterplot and it will give us some headaches when we attempt to add some discrete measures to the header that surrounds our chart (the % Diff text that appears to the left of the dot plot in Figure 4.) So, instead of having a single calculation I created two separate calculations to compute % top 2 boxes Importance and % top 2 boxes Satisfaction. The calculation for Importance is shown in Figure 9.

Figure 9 — Calculated field for determining the percentage of folks that selected the top two boxes for Importance.

Notice that we have all the rows associated with both the Importance and the Satisfaction questions “in play”, as it were, but we’re only tabulating results for the Importance questions, so we’re dividing by half of the total number of records.
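
Expressed as a sketch, the Figure 9 calculation looks something like the following (the “Importance” test assumes your question-grouping field labels the Importance questions that way):

SUM(IF [Question Grouping] = "Importance" AND [Value] >= 3 THEN 1 ELSE 0 END) / (SUM([Number of Records]) / 2)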

We’ll need to create a similar calculated field for the Satisfaction questions.

Ready to Rock and Roll

Understanding the Dot Plot

Figure 10 shows what drives the Dot Plot (we’ll add the connecting line in a moment.)

Figure 10 — Dissecting the Dot Plot.

Here we see that we have a Shape chart (1) that will display two different Measure Values (2) and that Measure Names (3) is controlling Shape and Color.

Creating the Connecting Line Chart

Figure 11 shows how the Line chart that connects the shapes is built.

Figure 11 — Dissecting the Line chart.

Notice that Measure Values is on Rows a second time (1), but for the second instance the mark type is a Line (2) and the end points are connected using Measure Names on the Path (3).  Also notice that there is no longer anything controlling the Color, as we want a line that is only one color.

Combining the Two Charts

The only thing we need to do now is combine the two charts into one by making a dual-axis chart, synchronizing the secondary axis, and hiding the secondary header (Figure 12).

Figure 12 — The completed connected dot plot.

What to Look for in the Dashboard

Any chart that answers a question usually fosters more questions. Consider the really big gap in Ability to Customize UI. Did all respondents indicate this, or only some?

And if one group was considerably more pronounced than others, what were the actual responses across the board (vs. just looking at the percent top 2 boxes)?

Figure 13 — Getting the details on how one group responded

The dashboard embedded below shows how you can answer these questions.

Got another approach that you think works better?  Let me know.

And… what did they choose?

March 8, 2017

Overview

I’ve discussed how to visualize check-all-that-apply questions in Tableau. Assuming your survey is coded as Yes = 1 and No = 0, you can fashion a sorted bar chart like the one shown in Figure 1 using the following calculation.

SUM([Value]) / SUM([Number of Records])

The field [Value] would be 0 or 1 for each respondent that answered the question.

Figure 1 — Visualizing a check-all-that-apply question

I’ve also discussed how we can break this down by various demographics (Gender, Location, Generation, etc.).

What I’ve not blogged about (until now) is how to answer the following questions:

  • How many people selected one item?
  • Two items?
  • Five items?
  • Of the people that only selected one item, what did they select?
  • Of the people that selected four items, what did they select?

Prior to the advent of LoD calculations this was doable, but a pretty big pain in the ass.

Fortunately, using examples that are “out in the wild” we can cobble together a compelling way to show the answers to these questions.

 

Visualizing How Many People Selected 1, 2, 3, N Items?

One of the best blog posts on Level-of-Detail expressions is Bethany Lyons’ Top 15 LoD Expressions.

It turns out the very first example discusses how to figure out how many customers placed one order, how many placed two orders, etc.  This will give us exactly what we need to figure out how many people selected 1, 2, 3, N items in a check-all-that-apply question.

Here’s the calculation that will do the job.

Figure 2 — The LoD calculation we’ll need.

This translates as “for the questions you are focusing on (and you had better have your context filters happening so you are looking at just the check-all-that-apply stuff), for each Resp ID, add up the values for all the questions people answered.”

Remember, the responses are 0s and 1s, so if somebody selected six things the SUM([Value]) would equal 6.
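
If you want to type it in yourself, the Figure 2 calculation is a FIXED LoD expression along these lines (field names assumed to match the sample data). Create a field called How Many Selected and define it as:

{ FIXED [Resp ID] : SUM([Value]) }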

So, how do we use this?

The beautiful thing about using FIXED as our LoD keyword is that it allows us to turn the results into a dimension.  This means we can put How Many Selected on columns and CNTD(Resp ID) on rows and get a really useful histogram like the one shown in Figure 3.

Figure 3 — Histogram showing the number of respondents that selected 0 items, 1 item, 2 items, etc.

Notice the filter settings indicating that we only want responses to the check-all-that-apply questions. Further note that this filter has been added to the context which means we want Tableau to filter the results before computing the FIXED LoD calculation.

 

So, What Did These People Select?

Okay, now we know how many people selected one item, two items, etc.

Just what did they select?

Because we set [How Many Selected] using the FIXED keyword we can use it like any other dimension.  That is, it will behave just like [Gender], [Location], and so on.

Borrowing from an existing technique (the visual ranking by category that I cited earlier) we can fashion a very useful dashboard that allows us to see some interesting nuances in the data.  For example, while Metabolism is ranked second overall with 70% of people selecting it, it ranked seventh among those that only selected one item (with only 4%), while 84% of people that selected four items selected it (Figure 4.)

Figure 4 — Metabolism is ranked second overall with 70%, but only 4% of folks that chose one item selected it.

Similarly, check out the breakdown for Blood Pressure which is ranked third with 60% overall but is ranked first among folks that only measure one thing (Figure 5.)

Figure 5 — Blood Pressure is ranked third overall with 60%, but was ranked first among those that only selected one item.

 

Other Useful Features of the Dashboard

The Marginal Histogram

The marginal histogram along the bottom of the chart shows you the breakdown of responses.

Figure 6 — Marginal histogram shows distribution of responses

Tool Tips Help Interpret the Findings

The ordinal numbers can be confusing as sometimes the number 2 means the number of items selected and other times it is the ranking.  Hovering over a bar explains how to interpret the results.

Figure 7 — Tool tips help you interpret the results.

Swap Among Different Dimensions

While this is first and foremost a blog post about showing how many people selected a certain number of items (and what they selected), it was very easy to add a parameter that allows you to swap among different dimensions.  In Figure 8 we see the breakdown by Location.

Figure 8 — Use the Break Down by parameter to see rank and magnitude for the selected item among different dimensions, in this case Location.

Here’s the embedded workbook for you to try out and download.

By Steve Wexler, January 15, 2017

Overview

Before going any further this post assumes you’ve gotten your data “just so”; that is, you’ve reshaped your data and have the responses in both text and numeric form.

If you’re not sure what this means, please review this post.

Taking inventory by finding the universe of all questions and all responses

Anyone who has attended one of my survey data classes or has watched my 2014 Tableau Conference presentation knows the first thing I like to do is assemble a demographics dashboard so I know who took the survey.

The second thing I do arose a few months ago when I had a “why didn’t I do this before” moment with respect to getting a good handle on questions, responses, and seeing if there was anything that was poorly coded.

Here’s how it works.

Note that I’m using the same sample data set that I use for my classes.

Figure 1 — Screen shot after just having connected to the data.

  1. Drag Question Grouping onto rows, followed by Wording, and then Question ID, as shown below.

    Figure 2 — About half-way through taking inventory.

  2. Right-click the measure called Value and select Duplicate.
  3. Rename the newly-created field Value (discrete).
  4. Drag the measure into the Dimensions area. This will make Tableau treat the field as discrete by default.

    Figure 3 — Turning the measure into a discrete dimension

  5. Drag the newly-created dimension to Rows.
  6. Drag Labels to Rows. Your screen should look like the one shown below.
Figure 4 — All the questions and all the responses, all on one sheet.

So, just what do we have here?

You can see from this portion of the screen that you have a bunch of questions about “Importance” and can also see that the possible values go from 0 to 4, where 0 maps to “Not at all important”, 1 maps to “Of Little Importance”, etc.

At this point you should be looking for any stray values, say a value of 5.

If you scroll down a little bit (Figure 5) you’ll see a question grouping called “Indicate the degree to which you agree” where you again have values of 0 through 4 but this time 0 maps to “Not at all”, 1 maps to “Small degree”, etc.

We should be pleased as it appears that our Likert questions consistently go from 0 through 4. This means we won’t have to craft multiple sets of calculated fields to deal with different numeric scales (not that having to do that would be a big deal).

Figure 5 — Next set of questions.

At this point it might be useful to add a filter so you can focus on only certain question groups. You can do this by filtering by Question Grouping as shown below.

Figure 6 — Adding a filter makes it easier to focus on specific question groupings

Spotting questions that have coding errors

In case you’re wondering what a coding error looks like, see what happens if we just focus on the “What do you measure” questions, as shown below.

Figure 7 — An example of a mis-coded check-all-that-apply question

So, for all of the check-all-that-apply questions the universe of possible values is 0 and 1. And with the exception of Question Q6 (Breathing), 0 maps to “No” and 1 maps to “Yes.”

The mis-coding of “Ni” instead of “No” will only present a problem if our calculated field for determining the percentage of people that checked an item were to use Labels instead of Values. My preferred formula for this type of calculation is this:

SUM([Value]) / SUM([Number of Records])

Because we’re using [Value] instead of [Label], the miscoding for this example won’t cause a problem.

Conclusion

Creating a giant text table that maps all Question Groupings, Question IDs, Labels, and Values on a single sheet allows us to quickly take an inventory of all questions and possible responses.  This in turn allows us to see if questions are coded consistently (e.g., do all the Likert Scale questions use the same scale) and to see if there are any coding errors.

I just wish I had started doing this years ago.

 

November 28, 2016

Note: Major thanks to Nazirah Garrison and Christie Clark at Tableau for suggesting this approach.

Overview

With Tableau 10.x it is in fact possible to get your survey data “just so” without having to invest in new tools and / or engage in a time-consuming, error-prone procedure every time you receive updated survey data.

There’s a lot of upside to this approach — everything is built into Tableau and you’ll just need to refresh the extracts (there will be several of them) as you get new data.  The downside is the setup is a little bit cumbersome and Tableau manifests some “hmm, that doesn’t seem right” behavior along the way.

Before embarking on this I encourage you read this post so you understand what I mean when I refer to the data being “just so.”

Also, if you would like to follow along you can download the source data here.

Three files and three extracts

Anyone that has read my posts or attended my classes knows that I want survey responses in both text and numeric formats. Sure, you can in fact manage with one format or the other, but you’re just creating a LOT more work for yourself if you don’t have the data in both formats.

You will also need what I call a “Helper” file — this is just a separate file that maps each question ID into human readable form and groups related questions together. Again, you can certainly get by without it but you’ll be working much harder than you need to, especially if you ever compare “Importance” with “Satisfaction” Likert-scale questions.

For this example we will create a separate extract for each of these three files, then use Tableau 10.x’s ability to join three different data sources.

Let’s start by creating the extracts.

To pivot the data labels and create an extract

  1. Start Tableau 10.x and indicate you want to connect to Excel.
  2. Connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx and drag the sheet Data Labels into the Drag sheets here area as shown below.
  3. Leave the first five columns intact (the Resp ID, demographic stuff, and Weight) and select all the other columns.
  4. Click any of the selected columns and select Pivot from the context menu.
  5. Rename the first pivoted column Question ID and the second column Label as shown below.
  6. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  7. When asked to save, name the file DR_JustSo_Labels.tde (make sure to note where you are saving the file).

One down, two to go.

To pivot the data numbers and create an extract

  1. Click the New Data Source icon and indicate you want to connect to an Excel file.
  2. As before, connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx, but this time drag Data Numbers to the Drag sheet here area.
  3. Hide the columns labeled Gender, Location, Generation, and Weight — we already have them in the other data source and don’t need them twice.
  4. Leaving Resp ID in place, select the second through the last columns.
  5. Click the down arrow on any of the selected columns and select Pivot from the context menu.
  6. Rename the first pivoted column Question ID and the second column Value, as shown below.
  7. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  8. When asked to save, name the file DR_JustSo_Numbers.tde.

Two down, one to go.

To create the question helper extract

  1. Click the New Data Source icon and indicate you want to connect to an Excel file.
  2. As before, connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx, but this time drag Question Helper to the Drag sheet here area.
  3. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  4. When asked to save, name the file DR_QuestionHelper.tde.

All three data sources are now ready.

Joining the three data sources together

We now have our three data sources as separate Tableau extract files. We’ll combine these three files (and create an extract from the joined files) using Tableau 10.x’s ability to join files from different data sources.

Note: This is where we’ll encounter Tableau’s “head-scratching” behavior.

To join the three data sources

Note: In first trying this Tableau presented a lot of warning messages about not being able to materialize a temporary table. While I could ignore these warnings and muddle through, you may not be so lucky. It turns out the culprit was my anti-virus software. I temporarily disabled it and everything worked without a hitch.  See http://kb.tableau.com/articles/issue/error-unable-to-materialize-temporary-table-joining-data-sources.

  1. Click the New Data Source icon and indicate you want to connect to More, as shown below.
  2. Select DR_JustSo_Labels.tde and click Open.
  3. Click Add, as shown below.
  4. Click More and then select DR_JustSo_Numbers.tde. Do not be fooled, the correct fields have NOT yet been joined.
  5. Click the overlapping Venn diagram to display the Join dialog box, as shown below.
  6. Click Number of Records and then click the X that appears in the row to indicate you do NOT want to join these two data sources using this field.
  7. From the left data source indicate you want to join using Resp ID, as shown below.
  8. From the right data source indicate that you want to join using RespID (Extract1). I have no idea why the field is named this way. More on this in a moment.
  9. From the left data source indicate that you want to join using Question ID. Yes, you need to join on more than one field.
  10. From the right data source indicate you want to join using Pivot Field Names (Extract1). Okay… THIS is the thing that has me scratching my head and as of this writing (November 27, 2016) I have no idea why the field isn’t also called Question ID.

    January 4, 2017 — Now I know why this is happening. When using data extracts (.TDE files) Tableau is only able to keep track of the alias names for fields in the first .TDE file.  All of that info gets stripped out of the second .TDE file, and Tableau just sees the original field name (Pivot Field Names). A little off-putting, yes, but easy to address, and it won’t matter a lick once we’re building our visualizations.

  11. Click Add again, select More, and select DR_QuestionHelper.tde.
  12. Click the second Venn overlapping circle and remove any of the joins that may be in there (most likely again using Number of Records).
  13. From the left data source select Question ID and from the right data source select QuestionID as shown below.
  14. Hide the fields RespID (Extract1), both Number of Records fields, the second Question ID, and Pivot Field Names, and rename Pivot Field Values to Value. Your screen should look like this.
  15. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  16. When asked to save, name the file DR_JustSoAll.tde. After creating the extract Tableau will show all the field names grouped by data source, as shown below.
  17. Click the down arrow and select Group by Folder.
    Okay, you don’t have to do this but I see no reason to group the fields by data source.

Congratulations, you now have your data “just so” and you did it all in Tableau.

So, how do you get the extracts to refresh?

Good question.  If you are using either Tableau Server or Online you can create shared data sources and program the extract to refresh on a regular or as-needed basis.

If you are using Tableau Desktop — and you have all four data sources in one workbook — you can just click the Data menu and select Refresh All Extracts.

Conclusion

While I find the process outlined here both cumbersome and confusing (what is up with those renamed fields NOT staying renamed?) this approach does appear to work and you only need to set it up once. The same cannot be said of The Tableau add-in for Excel which requires a lot of manual intervention every time you want to update the data.

Will this replace Alteryx as my tool of choice? No, but it does work and you can’t beat the price.

October 17, 2016

Overview

So, you’ve created a wonderful collection of survey data dashboards that have innumerable demographic filters so that users can, for example, just see responses from left-handed Los Angeles Lakers fans between the ages of 34 and 39.

So, what’s the problem?

Actually, there are two problems. The first occurs when extreme filtering reduces the number of responses so much that the results are statistically meaningless. The second is that you may inadvertently allow people to “glean” who has answered a survey.  For example, if you conduct a salary survey you want participants to be assured that nobody will be able to see individual responses. But if you have too many filters it may be possible to winnow down the results so you can guess who provided the answer.

Fortunately, it’s easy to set up a graceful way to suppress a chart and display an error message in its place when the “n” count gets too low.

How it works

Consider the dashboard shown in Figure 1.  Notice that the upper right corner shows that with nothing filtered there are a total of 350 responses.

Figure 1 — A simple dashboard showing results for a check-all-that-apply question.

Also notice there’s a parameter control that allows you to specify the cut-off point for displaying the visualization.

Now let’s see what happens if we use the filters to winnow down the number of responses to the point that there are fewer than 20 (Figure 2.)

Figure 2 — With too few responses the bar chart is suppressed and the warning message is displayed.

What’s happening here is that there are two floating charts, both with similar filters that look at how many survey responses there are. The bar chart in Figure 1 is set to appear if the number of responses is greater than or equal to 20. The warning message (it’s just a Tableau worksheet) appears if the number of responses is fewer than 20.

How the filters work

Let’s look first at what drives the bar chart (Figure 3.)

Figure 3 — Pill settings and filters for the bar chart.

Notice in particular that there is a field called [Minimum Count] on the Filters card and that it is set to True. The field [Minimum Count] is defined as follows:

Figure 4 — How [Minimum Count] is defined.

Here [Count Threshold] is the fill-in-the-blank parameter (currently set to 20).
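
If you’re building this yourself, [Minimum Count] is just a Boolean comparison along these lines (a sketch):

SUM([Number of Records]) >= [Count Threshold]

Because it returns True or False, it can sit on the Filters shelf set to True for the bar chart and False for the warning message.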

So, the visualization will only appear if there are at least 20 responses; otherwise the filter “kills” the viz and the only thing we see is the title.

Now, how does the secondary visualization work?  Let’s have a look (Figure 5.)

Figure 5 — Pill and filter settings for the “warning” visualization.

Notice that [Minimum Count] is also on the Filter card but is set to False. We’re seeing the viz (the red message) because the filters in place result in fewer than 20 responses.

And just what is producing the message? It’s the field [Too Few] that’s been placed on the Text button on the Marks card. The field is defined as follows.

Figure 6 — Definition of the field [Too Few].
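
[Too Few] is essentially a conditional string, something like this sketch (the exact message wording is yours to choose):

IF SUM([Number of Records]) < [Count Threshold] THEN "Not enough responses to display this chart. Please remove one or more filters." END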

Why use floating elements?

We certainly could cram the two visualizations into a container and make sure that when one is displayed the other only takes up a few pixels.  I elected to go with the floating approach but made sure that the secondary viz was set to be in back of the primary viz.

Is that all there is to it?

For a check-all-that-apply question, that’s all you need to know, but some question and visualization types may need different approaches.

Consider Figure 7 where we see a jitterplot comparing salary data for men and women where each dot represents a response from an individual survey participant.

Figure 7 — Pill and filter configuration for a jitterplot visualizing salary data.

Notice that we have a different field (one that uses a Table calculation) on the Filters card.

There’s also a very different setup to display the warning message for this visualization as we cannot simply base this on SUM([Number of Records]).

I will leave it to the reader to explore how these are set up.  Just let me know when you come up with a better approach.

Conclusion

If you’re visualizing survey data and giving users filters you should come up with a game plan for what to do if there are too few responses. In this blog post (and the embedded, downloadable workbook) I present two approaches for two types of survey questions. Other questions types (for example, Likert-scale questions) will need some modifications to what I’ve presented here.

May 10, 2016

Note: Special thanks to Joe Mako who came up with a considerably more elegant and extensible solution than I had first built. I describe Joe’s approach in this post.

Overview

I’m fortunate that when I have to work with survey data that is not well-coded I can take out my trusty copy of Alteryx Designer and get my data “just so.”

But what if you don’t have a copy of Alteryx? Or what if you want to use Qualtrics’ new Web Data Connector and don’t have the ability to insert a data prep tool within your workflow?

In this blog post we’ll see how you can get Tableau to address the problem.  Specifically, we’ll see how to work with check-all-that-apply questions where the data has been coded as 1s and blanks instead of 1s and 0s.

Note: While the example shown focuses on Qualtrics data the approach will work with any tool that leaves blanks for respondents that did not select an item (e.g., Survey Gizmo).

So, What’s Wrong with Blanks?

Consider the data snippet below that shows responses to a group of check-all-that-apply questions. This is what I consider to be badly-coded data.

Figure 1 — Check all that apply questions as they might appear in a CSV file downloaded from Qualtrics

Now here’s how I would prefer the data to be coded.

Figure 2 — Correctly-coded check all that apply questions

So, why is this better?

By having 1s and 0s we can see, easily, if some respondents didn’t answer the question group at all. In the figure below we can see that respondents 1002, 1006, 1011, and 1016 did not make any selection for any of the check-all-that-apply questions.  More importantly, with data coded with 1s and 0s we can just have Tableau filter out the Nulls and we’ll get accurate results.

Figure 3 — Respondents that didn’t answer this question group are highlighted in yellow

IMPORTANT: Before going any further, make note that respondents 1002, 1006, 1011, and 1016 did not make any selections and that of the 13 people that actually responded to Q1_4,  12 selected that option.  There will be a test.

So, given our constraints, how are we going to come up with a way to distinguish between a blank that means a respondent didn’t select a particular option and a blank that, when combined with three other blanks, means the person didn’t respond to the group of questions at all?

Before we plow into this I want to make sure people realize that we will soon pivot / reshape the data so that it is tall. Specifically, instead of dealing with this…

Figure 4 — Data before reshaping (as you would get it if you downloaded a .CSV file.)

… we’re going to be working with this:

Figure 5 — Reshaped data using Tableau’s built-in Pivoting feature. If you’re using the Qualtrics web data connector you don’t need to use this.

Once we pivot the data we’ll see the following collection of dimensions and measures in Tableau.

Figure 6 — Reshaped data as it appears in Tableau

Note: For this example I’m not going to the trouble of creating a data set that has the survey responses in both numeric and text format. The technique we explore will work equally well for when you have both as when you just have numeric values.

Grouping the related questions

To be able to come up with an elegant, extensible way to tell Tableau “don’t take into account questions where people didn’t make any selections” we’re going to need a way to group related questions together.

Note that the technique I’m about to propose is different from the ad-hoc grouping approach I describe in Using Qualtrics’ Web Data Connector and Reshaping Survey Data with Tableau 9.0.  The programmatic “group” we create here needs to be referenced in a calculated field, and you can’t do that with an ad-hoc group.

Here we’re going to take advantage of Qualtrics’ Question ID naming convention and how related questions all look like this:

Q(N)_

That is, a group of related check-all-that-apply questions might be named like this:

Q1_1
Q1_2

Q1_8

And a group of related Likert scale questions might be named like this:

Q7_1
Q7_2

Q7_13

Given this standardization, we need a mechanism that will group everything to the left of the underscore. There are several ways we can achieve this, but for this example we’ll create a new field called “Question Prefix” and define it as follows:

Figure 7 — Calculated field that will group related questions based on whatever comes before the underscore in the Question ID field.
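
One way to write this (a sketch — Tableau’s SPLIT function does the trick) is:

SPLIT([Question ID], "_", 1)

This returns everything before the first underscore, so Q1_1 through Q1_8 all yield “Q1”.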

Now let’s use this field and the Resp ID field to see who did not respond at all to a particular group of questions.

Figure 8 — This collection of fields show us who did not respond at all to a particular group of questions.

So, have a look at results in the image above where we place MAX(Value) on the text button. What’s happening is that for each respondent we’re looking at the answers to Q1_1, Q1_2, Q1_3, and Q1_4 and taking the maximum value across all those questions.  Notice that there are four blanks / Nulls for 1002, 1006, 1011, and 1016 — these are the four respondents that didn’t answer the question group at all (see, I told you there would be a test.)

We do NOT want to include these people when we calculate the percentage of people that selected a particular choice, so we can place MAX(Value) on the Filters shelf and indicate that we should not include Null values, as shown below.

Figure 9 — Filter that will exclude Null values.

Applying this filter will show us responses from people that did NOT skip the question, as shown below.

Figure 10 — Respondents that selected at least one option.

Okay, this is great, but the visualization we want to create shouldn’t look anything like this. Indeed, we have too many things on the level of detail at this point.

So what are we going to do?

Level-of-Detail Calculations to the Rescue

Anytime you think “I need Tableau to consider some dimensions when making a calculation, but those dimensions can’t be in the visualization” a giant alarm bell should go off in your head — this is precisely the use case for a LoD calculation!

Here’s the LoD calc that, when placed on the Filters shelf, will do the trick:

Figure 11 — LoD calc that allows us to exclude folks that skipped answering a question group.

A translation into English would be:

Even though [Question Prefix] and [Resp ID] may not be on the viz detail, take that combination of these two fields and find the MAX([Value]). Then find only the Null values from this and exclude them.

Note that we are using FIXED instead of INCLUDE as the former will automatically create a dimension and will work no matter what other filter settings may be in place.
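
Written out, the Figure 11 filter looks something like this (a sketch using the field names from this post):

NOT ISNULL({ FIXED [Question Prefix], [Resp ID] : MAX([Value]) })

Placed on the Filters shelf and set to True, it keeps only respondents who selected at least one option in the group.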

Putting This All Together

First we need to create a calculation that gives us the percentage of people that selected an option from the group of check-all-that-apply questions. For this instance the following calculation will work well.

Figure 12 — How to determine the percentage of people that selected an option.

A quick translation into English would be:

Add up everybody that answered “1” (meaning they selected an option) and divide by the number of people that answered the question.
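
As a sketch, the Figure 12 calculation is most likely the familiar formula from earlier posts:

SUM([Value]) / SUM([Number of Records])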

Let’s see if this works. Consider the visualization shown below.

Figure 13 — First attempt at showing the percentage of respondents that selected an option. This is NOT showing the correct percentage.

Does this pass the “smell” test? Look at Q1_4. You may recall that we saw that 12 out of 13 people selected this option — but 12 out of 13 is 92%, not 71%.  Indeed, if we place SUM([Number of Records]) as a discrete measure on the Rows shelf we see that Tableau believes 17 people responded to the question.

Figure 14 — Without a filter Tableau believes 17 people responded to this group of questions

We need to put our LoD calc on the Filters shelf and set it to True.

Figure 15 — Removing “All of these are Null” from the respondent pool gives us the correct count of 13 and the correct percentage (12 out of 13).

Conclusion

How check-all-that-apply questions are coded is a problem with many survey tools, including Qualtrics and Survey Gizmo. If you have a good data preparation tool like Alteryx you can fix the problem, but if you don’t have such a tool or don’t want to insert something into your workflow then you can get Tableau to address the problem with a Level-of-Detail calculation.

March 17, 2016

Overview

I’m a big fan of Andy Kriebel’s and Andy Cotgreave’s Makeover Monday challenge. For those of you not familiar with this, each week Kriebel and Cotgreave find an existing visualization / data set and ask the data visualization community to come up with alternative ways to present the same data.

As Cotgreave points out in one of his blog posts “It’s about using a tool to debate data. It’s about improving people’s data literacy.”

With one major exception that I’ll discuss in a moment the challenge is meeting its goals as each week several dozen people participate and the submissions and accompanying discussions have been enormously valuable.

But there was one week where the community failed.

Worse than that, the community did some damage that will be difficult to repair.

Bad Data Make Bad Vizzes Make Bogus Conclusions

Week four of the Makeover Monday challenge used survey data from GOBankingRates that posed the question “how much money do you have saved in your savings account?” Here are some of the baseless conclusions from people that participated in the makeover:

Figure 1 — From the source article that spawned the makeover.  Yes, the exploding donut needs a makeover, but it’s the headline “Survey finds that two-thirds of Americans don’t have enough money saved” that presents the bigger problem.

  • Americans Don’t Have Enough Money Saved (See link).
  • 71% of Americans Have Less than $1,000 in Savings. Yikes! (See link).
  • Americans Just Aren’t Saving Money (See link).
  • Most Americans Have Miniscule Savings (See link).
  • 80% of Americans Have Less than $5,000 in Savings! (See link).
  • Americans Are Not Saving Enough Money! (See link).
  • Americans Have Too Little Savings (See link).

So, what’s the problem?

It turns out the key finding from the original publication is not true — and thanks to the makeovers that spurious finding has been amplified dozens of times.

How did this happen?

Let’s dig into the data a little bit.

Is There a Relationship Between Age and Savings?

As I mentioned before I think the Monday Makeover Challenge is great and I’ve participated in a couple of them. I started to work on this one and took a stab at showing the relationship between age and savings, as shown here.

Figure 2 — Divergent stacked bar chart showing the percentage of people that have different savings amounts, sorted by age

This looked odd to me as I expected to see a correlation between age and savings; that is, I expected to see a lot more blue among Seniors and Baby Boomers.

I decided to make the demarcations less granular and just compare people with minimal savings and those with $1,000 or more in savings, as shown here.

Figure 3 — Less granular divergent stacked bar chart

This result seemed way off, so either my supposition is wildly incorrect (i.e., as people get older they save more) or there was something wrong with the data.

Note: I try to remind people that Tableau isn’t just for reporting interesting findings. It’s a remarkably useful tool for finding flaws in the data.

It turns out that while there is indeed something wrong with the data, there was a much bigger problem:

Most people didn’t bother to look at the actual question the survey asked.

What the Survey Asked

The survey asked “How much money do you have saved in your savings account?”  It did not ask “How much money do you have saved?”

The difference is titanic, as the average American savings account yields but 0.06 percent interest!  That’s infinitesimal — you might as well stick your money in a mattress!

Indeed, I am of the Baby Boomer generation and I have but $20 in my savings account — but (thankfully) more in savings elsewhere.

So, the vast majority of people that participated in the makeover didn’t bother to look at the actual question and came to — and published — a bogus conclusion.

Were there any other problems with the survey?

You betcha.

What’s Wrong with the Survey?

A visualization is only as good as its underlying data and the data in question has nothing to do with the savings habits of Americans; it only has to do with having a savings account.

But there are other shortcomings with the survey that should make us question whether the data is even useful for analyzing how much money Americans have sitting in a savings account.

Consider this excellent review of the same Makeover Monday challenge from Christophe Cariou.  He points out the following shortcomings with the survey itself:

  • In the article, we read: ‘The responses are representative of the U.S. internet population’. It is therefore not representative of the US population. See this report by Pew Research Center for age and online access.
  • We also read ‘Demographic information was not available for all respondents, and analysis of responses by demographics is based solely on responses for which the targeted demographic information was available.’ Normally, if it was demographically representative, this would be clarified. This comment adds a doubt.
  • The average savings amount in the article is the sum of the averages of the groups divided by 6. It is not weighted by the size of each group.

Note: Kudos to Bridget Cogley who also saw the problems with the conclusions when the makeovers first appeared in late January 2016.

Further note: In a subsequent makeover challenge blog post Cotgreave alerted participants to questionable data.

So, Where Exactly is the Harm?

So, dozens of people have created visualizations based on bad data and came up with bogus conclusions. Given the number of articles from allegedly reliable sources reporting shortcomings in savings, what’s the harm of sounding an alarm bell?

I suppose if you are an “ends justify the means” type of person then it’s fine to publish bogus findings as long as they change behavior in a positive way.

But I know many of the people in this community and they would be aghast at using data visualization this way.

I also fear that with collective missteps like this people will question the ability of makeover participants to relay accurate information.

So What Should We Do?

Andy Cotgreave and Andy Kriebel have earned their leadership positions in the data visualization community, so I hope they will make note of this makeover mishap and encourage people that published the bogus result to modify their headlines.

I also strongly encourage anyone working in data visualization to understand the data — warts and all — before rushing to publish. Andy Kriebel is providing the data set and we shouldn’t ask him to find all the flaws in it.  Indeed, that’s part of our job.

Finally, I ask others in the community to be more diligent: only publish work that has been carefully vetted and do not tolerate unsubstantiated work from others.

While it’s true that nothing terrible will happen if more Americans open savings accounts, there may be other situations where publishing spurious conclusions will do some serious damage.

February 11, 2016

Overview

As I’ve mentioned before, the number one impediment to success with visualizing survey data in Tableau is getting the data in the right format so you can do all the dazzling things you want to do.

Up until very recently none of the survey tools vendors had done anything to make things easy so you had to pursue one of the following options:

  • Use the Tableau add-in for Microsoft Excel (only works on a PC and can be very cumbersome if you have data that needs to be updated.)
  • Use the pivot feature in Tableau 9.x (works great but is missing some important functionality, especially for larger surveys.)
  • Use Alteryx (Best-in-class but it has its own learning curve and is pricey.)
  • Get somebody to do this for you (like that’s going to happen.)

But now there is another choice and it warrants its own blog post.

Say hello to the Qualtrics web connector

Survey vendor Qualtrics has developed a web data connector so that you can connect directly to your Qualtrics surveys from inside Tableau. While the connector does not have everything I would want (more on this in a moment) this offering presents a great step forward for Qualtrics users that want to better see and understand their data using Tableau.

In this blog post we’ll look at the following:

  • Wide vs. tall data (and why we need it to be tall)
  • How to connect to Qualtrics survey data using Tableau
  • How you can get the data “just so” using data blending
  • Dealing with null values
  • Benefits and possible drawbacks

Wide vs. Tall Data

Most survey tools present the data in a one-column-for-each-question format that looks something like this.

Figure 1 — One row per respondent, one column for each question

We will be absolutely miserable working with the data in this format so we need something that can reshape the data so that we have a separate row for each question, as shown below.

Figure 2 — Each respondent is listed many times (one row for each question)

Actually, we want to do more than just reshape the data.  Ideally we want to get something that looks like this.

Figure 3 — Ideal format for survey data

Qualtrics can get us most of the way there, and with some simple data blending it can get us all of the way there.

How to connect to Qualtrics survey data

Let’s assume you have a Qualtrics ID and at least one survey with some data.

  1. Log into Qualtrics.com and access the survey to which you will want to connect.
  2. Click the Responses tab and click the Export Data button, as shown here.
  3. When the Export Responses dialog box appears, click Tableau, then copy the URL, as shown below.
  4. Close the dialog box and open Tableau.
  5. From within Tableau indicate you want to connect to a new data source and select Web Data Connector.
  6. When the Web Data Connector dialog box appears paste the survey URL that you copied from Qualtrics.
  7. When asked, enter your Qualtrics username and password and click Log In.
    Qualtrics will then ask you which fields you want from your survey and which fields you will want to transpose.
  8. Start by indicating that you want to select all fields and transpose all the fields, as shown below.
  9. Indicate which fields you won’t need at all and which you won’t need transposed. In this example we will want Response ID, Q1: Gender, Q2: Region, and Q3: Age to be separate columns, but all other selected fields need to be transposed (pivoted).
  10. Click Import Survey Responses and wait a little bit as Tableau imports the data and creates a Tableau Data Extract file.

So, just what do we have here?

After specifying that we want the vast majority of questions (fields) to be transposed, here’s what we’ll see from inside Tableau.

Figure 4 — Dimensions and Measures generated by the Qualtrics Web Data Connector

It may not look like much but the fact that we get both text and numeric responses to the survey questions will be a huge time saver.

Here’s what we get if we place Question ID, Question Description, Textual Answer, and Numeric Answer on the Rows shelf.  Please note that Numeric Answer has been converted to discrete so we can easily see how the text responses line up with the numeric responses.

Figure 5 — “What do we have here?” viz that shows Question ID, human-readable translation of the Question ID, all possible text responses, and all possible numeric responses.

Here I can see how each Question ID maps to a human-readable translation of the Question ID (i.e., the Question Description) as well as all possible text responses and all possible numeric responses.  Yes, there are some null values here, but we’ll deal with them in a moment.

I wish I could express just how beautiful this is! This has almost everything I could possibly want and I barely had to lift a finger.

Okay, what did I mean by “almost”?

This particular survey does not contain many questions so I could work with what I have right now without too many problems.  But if I had a survey with hundreds of questions then I would benefit greatly from having some way to group related questions together.

Fortunately I can do this easily using Tableau’s blending feature.

How you can get the data “just so” using data blending

I need to create a separate Excel file that groups the Question IDs so that I can more easily find and manipulate the data in Tableau.

Figure 6 — Example of a “Question Helper” file.

Note:  In some cases it isn’t just convenience that warrants creating this “helper” file.  NPS analysis and Importance vs. Satisfaction analysis will be very difficult without building a scaffolding file like the one shown above.

Let’s see how to create the blend.

  1. On a new worksheet drag Question ID onto rows.
  2. Click the Add New Data Source icon.
  3. Connect to the Helper File and indicate whether or not you want to create an Extract (of course you do!)
    Notice that there is a link field on Question ID in what will be the secondary data source.
  4. Drag Grouping to the right of Question ID and then Qtype to the right of Grouping as shown below.
  5. Right-click the Grouping pill and select Create Primary Group.
  6. Rename the field Question Grouping as shown below.
  7. Right-click the Qtype pill and select Create Primary Group.
  8. Rename the field Qtype and click OK.
    At this point we no longer need the secondary data source as we’ve created some very useful groups in the primary data source.
  9. Remove the Grouping and Qtype pills from the Rows shelf.
  10. Close the secondary data source.
  11. Drag Question Grouping and QType to the left of Question ID as shown below.

Putting this to use

Let’s see how to put all this to good use; specifically, let’s see how to visualize a check-all-that-apply question.

  1. In a new sheet drag Question Grouping to the Filters shelf and select What do you measure, as shown below.
  2. Drag Question Description to rows.
  3. Create a new calculated field called CheckAll_Percent and define it as follows (see the sketch after these steps):
  4. Right-click the newly created measure and indicate you want to change its default numeric format to Percentage with 0 decimal places.
  5. Drag CheckAll_Percent to the Columns shelf, turn on mark labels, and sort in descending order.

    Figure 8 — Percentage of respondents that measure Pulse Rate, Adrenaline Production, and so on.
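
A sketch of the CheckAll_Percent calculation (assuming the Numeric Answer field the connector generates holds the 1s and 0s):

SUM([Numeric Answer]) / SUM([Number of Records])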

Dealing with nulls

As we saw earlier we have some nulls and we need to make sure they don’t muck up the visualization.

As of this writing the Qualtrics Web Connector writes a 0 as the numeric answer and a blank as a textual answer for Nulls.  I would have expected either a “99” or an actual NULL value as we cannot simply filter the 0 values as sometimes a 0 means “no”, sometimes it may mean “strongly disagree,” and so on.

Fortunately, the Qualtrics Web Connector does write an unambiguous blank that we can use as our filter.

Let’s see how to remove the null values in the visualization above.

  1. Drag Textual Answer to the Filters shelf.
  2. When the Filter dialog box appears, indicate that you want to see Fewer values.
  3. Indicate that you want to Exclude blanks.
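
If you'd rather use a formula than the filter dialog, a boolean calculated field along these lines should work as well when placed on the Filters shelf and set to True (a sketch, assuming the text responses live in the [Textual Answer] field):

   // True for real responses; False for the blanks
   // the connector writes in place of nulls
   TRIM([Textual Answer]) <> ""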

Benefits and possible drawbacks

So, just what are the benefits and potential drawbacks of using the Qualtrics web connector?

Benefits

There are two clear benefits to using the Qualtrics web connector:

  1. You can get the data the way you need it with very little effort – there’s practically no learning curve.
  2. As you get more survey responses you just need to update the Extract. The workflow is seamless.

Possible drawbacks

  1. What happens when there are many questions? It’s unclear how well this will work when you have many questions, let alone many survey responses. Navigating the connection dialog to find the fields you want and don’t want to transpose may prove cumbersome.
  2. Weighted data. For those who need it, there must be some mechanism in the workflow to weight survey responses. This is something that can be automated using a tool such as Alteryx.
  3. The Qualtrics web connector is not free and the pricing will vary based on a variety of factors. Given this, there may be cases where it makes sense to sacrifice Qualtrics' ease and seamlessness for a more powerful and flexible ETL solution.

Conclusion

If you are a Qualtrics user who wants to explore and visualize data using Tableau, you should definitely take the Qualtrics web connector for a test drive, as Qualtrics has made the task of getting survey data into Tableau easy and seamless.

If you’re using a different survey tool and want to visualize your data with Tableau, you may want to consider switching to Qualtrics.

Jan 11, 2016

Overview

I spend a lot of time with survey data, and much of this data revolves around gauging people's sentiments and tendencies using either a Likert scale or a Net Promoter Score (NPS)-style question.

Examples

Here’s an example of gauging sentiment using a 5-point Likert scale.

Indicate how satisfied you are with the following:


Here’s an example of measuring tendencies, using a 4-point Likert scale.

How often do you use the following learning modalities?


So, what’s a good way to visualize responses to these types of questions?

Over the past ten years I've spent thousands of hours working on the best ways to show how opinions and tendencies skew one way or another.  I have found that in most cases a divergent stacked bar chart helps me (and more importantly, my clients) best see what's going on with the survey responses.

In this blog post we'll

  • See an example of a divergent stacked bar chart (also called a staggered stacked bar chart)
  • Work through a data visualization improvement process
  • Show how to visualize different scales (e.g., NPS, Top 3/Bottom 3, 5-point Likert, etc.)
  • Show sentiment and tendencies over time
  • Present a dashboard that will allow you to experiment with different visualization approaches

Note: for step-by-step instructions on how to build a Likert-scale divergent stacked bar chart in Tableau, click here.

Divergent Stacked Bar vs. 100% Stacked Bar

Readers of my newsletter and folks visiting the web site may have seen my redesign of a New York Times infographic that showed the tendencies of politicians to lie or tell the truth.  Here's the 100% stacked bar chart that appeared in the New York Times.

Figure 1 — 100% stacked bar chart.

Here’s the redesign using a divergent stacked bar chart.

Figure 2 — Divergent stacked bar chart.

With both the 100% stacked bar chart and the divergent stacked bar chart the overall length of the bars is the same, but with the divergent approach the bars are shifted left or right to show which way a candidate leans. I, and others I've polled, find that shifting the bars makes the chart easier to understand.

How We Got Here — Likert Scale Improvement Process

Consider the table below that shows the results from a fictitious poll on the use of various learning modalities.

Figure 3 — Survey results in a table.

I can’t glean anything meaningful from this.

What about a bar chart?

Figure 4 — Likert scale questions using a bar chart. Yikes.

Wow, that’s really bad.

What about a 100% stacked bar chart?

Figure 5 — 100% stacked bar chart using default colors.

Okay, that’s better, but it’s still pretty bad as Tableau’s default colors do nothing to help us see tendencies that are adjacent. That is, “Often” and “Sometimes” should have similar colors, as should “Rarely” and “Never.”

So, let’s try using better colors…

(…and don’t even think about using red and green.)

Figure 6 — 100% stacked bar chart using a more appropriate color scheme.

This is certainly an improvement, but the modalities are listed alphabetically and not by how often they’re used. Let’s see what happens when we sort the bars.

Figure 7 — Sorted 100% stacked bar chart with good colors.

It’s taken us several tries, but it’s now easier to see which modalities are more popular.

But we can do better.

Here’s the same data rendered as a divergent stacked bar chart.

Figure 8 — Sorted divergent stacked bar chart with good colors.

Of course, we can also take a coarser view and just compare Sometimes/Often with Rarely/Never, as shown here.

Figure 9 – Divergent stacked bar chart with only two levels of sentiment.
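
Should you want to produce this coarser view with a calculation rather than with ad-hoc groups, something like this would do it (a sketch, assuming the four-point scale is coded 1 = Never through 4 = Often in a field called [Value]):

   // Collapse a 4-point tendency scale into two buckets
   IF [Value] >= 3 THEN "Sometimes / Often"
   ELSE "Rarely / Never"
   END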

I find that the divergent approach “speaks” to me and it resonates with my colleagues and clients.

Experiments using Different Scales

A while back Helen Lindsey was kind enough to send me some data that contained responses to some Net Promoter Score questions.  Specifically, folks were asked to rate companies/products on a 0 to 10 or 1 to 10 scale.

Figure 10 — The classic Net Promoter Score (NPS) question

We compute NPS by taking the percentage of folks who are promoters (i.e., people who responded with a 9 or a 10), subtracting the percentage of folks who are detractors (i.e., people who responded with a 0 through 6), and multiplying by 100.
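
Expressed as a Tableau calculation, that might look like the following (a sketch, assuming the 0 to 10 response lives in a field called [Score]):

   // Promoters (9-10) count as +1, detractors (0-6) as -1,
   // and passives (7-8) as 0; the average, times 100, is the NPS
   SUM(IF [Score] >= 9 THEN 1
       ELSEIF [Score] <= 6 THEN -1
       ELSE 0 END)
   / SUM([Number of Records]) * 100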

But sometimes my clients have questions on a 10- or 11-point scale and instead want to compute the percentage of folks who responded with one of the top three boxes minus the percentage of folks who responded with one of the bottom three boxes.
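
On a 1 to 10 scale the equivalent calculation might look like this (again a sketch; shift the cutoffs down by one for a 0 to 10 scale):

   // Top 3 boxes (8-10) count as +1, bottom 3 boxes (1-3) as -1
   SUM(IF [Score] >= 8 THEN 1
       ELSEIF [Score] <= 3 THEN -1
       ELSE 0 END)
   / SUM([Number of Records]) * 100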

I realized that the Lindsey data set could provide a type of “sandbox” where we could experiment with different sentiment scales including NPS, Top 3 minus Bottom 3, 5-point Likert, 3-point Likert, and 2-point Likert.

Let’s look at the results of some of these experiments.

NPS

Here are two ways we can visualize NPS data.  The first shows the percentages of people that fall into the three categories.

Figure 11 — NPS showing percentages

Here’s the same view, but with the NPS score superimposed over the divergent stacked bars.

Figure 12 — NPS with score superimposed

NPS over Time

It turns out that divergent stacked bars are great at showing NPS trends over time.  Here’s a view using percentages.

Figure 13 — Divergent stacked bar showing NPS over time with percentages

Here’s the same view but with the score superimposed.

Figure 14 — Divergent stacked bar showing NPS over time with scores

Note: for some other interesting treatments of showing sentiment over time, see Joe Mako's visualization on banker honesty.

Net = Top 3 minus Bottom 3

Let’s take the same data but divide it into the following buckets:

  • Positive = Top 3 Boxes
  • Neutral = Middle 4 Boxes
  • Negative = Bottom 3 Boxes

Here are the associated visualizations.

Figure 15 — Top 3/Bottom 3 shown with percentages

Figure 16 — Top 3/Bottom 3 with scores

Five, Three, and Two-Point Likert Scale Renderings

Let’s suppose that instead of asking a questions on a 1 through 10 scale we instead asked folks to select one of the following five responses:

  • Strongly disagree
  • Disagree
  • Neutral
  • Agree
  • Strongly agree

Here’s the same NPS data but rendered using a five-point Likert scale.

Figure 17 — Divergent stacked bar chart showing all responses

And here’s the same data, but divided into positive, neutral, and negative sentiments (3-point Likert).

Figure 18 — Divergent stacked bar showing positive, neutral, and negative

Finally, here’s the same data, but only showing positive and negative sentiments (2-point Likert).

Figure 19 — Divergent stacked bar showing just positive and negative

Try it yourself

Below you will find a dashboard that allows you to explore different combinations of the 1 to 10 scale.

I strongly recommend you do NOT give your audience all of these scaling options; they are here for you to experiment and see how the visualizations and rankings change based on the scales you use. The only option I would present to your audience is the ability to toggle back and forth between percentages and scores.

Nov 22, 2015

Overview

I received an e-mail inquiry about weighted data recently and realized that while I cover this in my survey data class I had not yet posted anything about it here.  Time to remedy that.

The good news is that it is not at all difficult to work with weighted survey data in Tableau.  And just what do I mean by weighted data? We use weighting to adjust the results of a study so that the results better reflect what is known about the population. For example, if the subscribers to your magazine are 60% female but the people who take your survey are only 45% female, you should weight the responses from females more heavily than those from males.
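
With the numbers above, each female response should carry a weight of 0.60 / 0.45 ≈ 1.33 and each male response a weight of 0.40 / 0.55 ≈ 0.73; that is, the population share divided by the sample share. If the weights aren't already in your data, a calculated field along these lines could generate them (a sketch; [Gender] is a hypothetical field name):

   // Weight = population share / sample share
   IF [Gender] = "Female" THEN 0.60 / 0.45   // ≈ 1.33
   ELSE 0.40 / 0.55                          // ≈ 0.73
   END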

To do this each survey respondent should have a weighting amount associated with their respondent ID, as shown here.

Figure 1 – A snippet of survey data showing a separate column for Weight.

When pivoting / reshaping the data make sure that [Weight] does not get reshaped.  It should remain in its own column like the other demographic data.

Once this is in place we'll need to modify the formulas for the following question types:

  • Yes / No / Maybe (single punch)
  • Check-all-that-apply (multi-punch)
  • Sentiment / Likert Scale (simple stacked bar)
  • Sentiment / Likert Scale (divergent stacked bar)

Yes / No / Maybe (single punch)

With this type of question you usually want to determine the percentage of the total.

Figure 2 — Visualization of a single-punch question

Unweighted calculation

The table calculation to determine the percentage of total using unweighted data is

   SUM([Number of Records]) / TOTAL(SUM([Number of Records]))

Weighted calculation

The table calculation to determine the percentage of total using weighted data is

   SUM([Weight]) / TOTAL(SUM([Weight]))

Check-all-that-apply (multi-punch)

With this type of question you usually want to determine the percentage of people that selected an item.  The total will almost always add up to more than 100% as you are allowing people to select multiple items.

Figure 3 — Visualization of a multi-punch question

Most surveys will code the items that are checked with a “1” and those that are not checked with a “0”.

Unweighted calculation

The calculation to determine the percentage of people selecting an item using unweighted data is

   SUM([Value]) / SUM([Number of Records])

where [Value] is the name of the measure that contains the survey responses.  If the survey responses are coded as labels instead of numbers you can use this formula instead.

   SUM(IF [Label]="Yes" then 1 ELSE 0 END) / SUM([Number of Records])

Weighted calculation

The calculation to determine the percentage of people selecting an item using weighted data is

   SUM(IF [Value]=1 then [Weight] ELSE 0 END) / SUM([Weight])

Sentiment / Likert Scale (simple stacked bar)

This is very similar to the single-punch question but instead we have several questions and compare them using a stacked bar chart.  I am not a big fan of this approach but it can be useful when you superimpose some type of score (e.g., average Likert value, percent top 2 boxes, etc.).

Figure 4 — Simple Likert Scale visualization

Figure 5 — Simple Likert Scale visualization with Percent Top 2 Boxes combo chart

Unweighted calculation – Stacked Bar

The table calculation to determine the percentage of total using unweighted data is

   SUM([Number of Records]) / TOTAL(SUM([Number of Records]))

Weighted calculation – Stacked Bar

The table calculation to determine the percentage of total using weighted data is

   SUM([Weight]) / TOTAL(SUM([Weight]))

Unweighted calculation – Percent Top 2 Boxes

Assuming a 1 through 5 Likert scale, the calculation to determine the percentage of people selecting either Very High Degree or High Degree (top 2 boxes) using unweighted data is

   SUM(IF [Value]>=4 THEN 1 ELSE 0 END) / SUM([Number of Records])

Weighted calculation – Percent Top 2 Boxes

Assuming a 1 through 5 Likert scale, the calculation to determine the percentage of people selecting either Very High Degree or High Degree (top 2 boxes) using weighted data is

   SUM(IF [Value]>=4 THEN [Weight] ELSE 0 END) / SUM([Weight])

Sentiment / Likert Scale (divergent stacked bar)

Here is what I believe is a preferable way to show how sentiment skews across different questions.

Figure 6 — A divergent stacked bar chart

I’ve covered how to build this type of chart using unweighted values here.

There are six fields we need to fashion the visualization, three of which need to be modified to make the visualization work with weighted data.

  • Count Negative
  • Gantt Percent
  • Gantt Start
  • Percentage
  • Total Count
  • Total Count Negative

Count Negative – Unweighted

Assuming a 1 – 5 Likert scale, the calculation to determine the number of negative sentiment responses using unweighted data is

   IF [Score]<3 THEN 1
   ELSEIF [Score]=3 THEN .5
   ELSE 0 END

Count Negative – Weighted

Assuming a 1 – 5 Likert scale, the calculation to determine the number of negative sentiment responses using weighted data is

   IF [Score]<3 THEN [Weight]
   ELSEIF [Score]=3 THEN .5 * [Weight]
   ELSE 0 END

Percentage – Unweighted

The calculation that determines both the size of the Gantt bar and the label for the bar using unweighted data is

   SUM([Number of Records])/[Total Count]

Percentage – Weighted

The calculation that determines both the size of the Gantt bar and the label for the bar using weighted data is

   SUM([Weight])/[Total Count]

Total Count – Unweighted

The calculation that determines the total number of responses for a particular question for unweighted data is

   TOTAL(SUM([Number of Records]))

Total Count – Weighted

The calculation that determines the total number of responses for a particular question for weighted data is

   TOTAL(SUM([Weight]))

Summary

Here’s a summary of all the unweighted calculations and their weighted equivalents

Unweighted Weighted
SUM([Number of Records]) / TOTAL(SUM([Number of Records])) SUM([Weight]) / TOTAL(SUM([Weight]))
SUM([Value]) / SUM([Number of Records]) SUM(IF [Value]=1 then [Weight] ELSE 0 END) / SUM([Weight])
SUM(IF [Value]>=4 then 1 ELSE 0) / SUM([Number of Records]) SUM(IF [Value]>=4 then [Weight] ELSE 0) / SUM([Weight])
IF [Score]<3 THEN 1 ELSEIF [Score]=3 THEN .5 ELSE 0 END IF [Score]<3 THEN [Weight] ELSEIF [Score]=3 THEN .5 * [Weight] ELSE 0 END
SUM([Number of Records])/[Total Count] SUM([Weight])/[Total Count]
TOTAL(SUM([Number of Records])) TOTAL(SUM([Weight]))