Nov 28, 2016
 

Note: Major thanks to Nazirah Garrison and Christie Clark at Tableau for suggesting this approach.

Overview

With Tableau 10.x it is in fact possible to get your survey data “just so” without having to invest in new tools or engage in a time-consuming, error-prone procedure every time you receive updated survey data.

There’s a lot of upside to this approach — everything is built into Tableau and you’ll just need to refresh the extracts (there will be several of them) as you get new data.  The downside is the setup is a little bit cumbersome and Tableau manifests some “hmm, that doesn’t seem right” behavior along the way.

Before embarking on this I encourage you to read this post so you understand what I mean when I refer to the data being “just so.”

Also, if you would like to follow along you can download the source data here.

Three files and three extracts

Anyone that has read my posts or attended my classes knows that I want survey responses in both text and numeric formats. Sure, you can in fact manage with one format or the other, but you’re just creating a LOT more work for yourself if you don’t have the data in both formats.

You will also need what I call a “Helper” file — this is just a separate file that maps each question ID into human readable form and groups related questions together. Again, you can certainly get by without it but you’ll be working much harder than you need to, especially if you ever compare “Importance” with “Satisfaction” Likert-scale questions.

For this example we will create a separate extract for each of these three files, then use Tableau 10.x’s ability to join three different data sources.

Let’s start by creating the extracts.

To pivot the data labels and create an extract

  1. Start Tableau 10.x and indicate you want to connect to Excel.
  2. Connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx and drag the sheet Data Labels into the Drag sheets here area as shown below.
  3. Leave the first five columns intact (the Resp ID, demographic stuff, and Weight) and select all the other columns.
  4. Click any of the selected columns and select Pivot from the context menu.
  5. Rename the first pivoted column Question ID and the second column Label as shown below.
  6. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  7. When asked to save, name the file DR_JustSo_Labels.tde (make sure to note where you are saving the file).

One down, two to go.

To pivot the data numbers and create an extract

  1. Click the New Data Source icon and indicate you want to connect to an Excel file.
  2. As before, connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx, but this time drag Data Numbers to the Drag sheets here area.
  3. Hide the columns labeled Gender, Location, Generation, and Weight — we already have them in the other data source and don’t need them twice.
  4. Leaving Resp ID in place, select the second through the last columns.
  5. Click the down arrow on any of the selected columns and select Pivot from the context menu.
  6. Rename the first pivoted column Question ID and the second column Value, as shown below.
  7. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  8. When asked to save, name the file DR_JustSo_Numbers.tde.

Two down, one to go.

To create the question helper extract

  1. Click the New Data Source icon and indicate you want to connect to an Excel file.
  2. As before, connect to DR_SurveySampleData_SourceFiles_Fall2016.xlsx, but this time drag Question Helper to the Drag sheets here area.
  3. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  4. When asked to save, name the file DR_QuestionHelper.tde.

All three data sources are now ready.

Joining the three data sources together

We now have our three data sources as separate Tableau extract files. We’ll combine these three files (and create an extract from the joined files) using Tableau 10.x’s ability to join files from different data sources.

Note: This is where we’ll encounter Tableau’s “head-scratching” behavior.

To join the three data sources

Note: When I first tried this, Tableau presented a lot of warning messages about not being able to materialize a temporary table. While I could ignore these warnings and muddle through, you may not be so lucky. It turns out the culprit was my anti-virus software. I temporarily disabled it and everything worked without a hitch.  See http://kb.tableau.com/articles/issue/error-unable-to-materialize-temporary-table-joining-data-sources.

  1. Click the New Data Source icon and indicate you want to connect to More, as shown below.
  2. Select DR_JustSo_Labels.tde and click Open.
  3. Click Add, as shown below.
  4. Click More and then select DR_JustSo_Numbers.tde. Do not be fooled, the correct fields have NOT yet been joined.
  5. Click the overlapping Venn diagram to display the Join dialog box, as shown below.
  6. Click Number of Records and then click the X that appears in the row to indicate you do NOT want to join these two data sources using this field.
  7. From the left data source indicate you want to join using Resp ID, as shown below.
  8. From the right data source indicate that you want to join using RespID (Extract1). I have no idea why the field is named this way. More on this in a moment.
  9. From the left data source indicate that you want to join using Question ID. Yes, you need to join on more than one field.
  10. From the right data source indicate you want to join using Pivot Field Names (Extract1). Okay… THIS is the thing that has me scratching my head and as of this writing (November 27, 2016) I have no idea why the field isn’t also called Question ID.

    January 4, 2017 — Now I know why this is happening. When using data extracts (.TDE files) Tableau is only able to keep track of the alias names for fields in the first .TDE file.  All of that info gets stripped out of the second .TDE file and Tableau just sees the original field name (Pivot Field Names). A little off-putting, yes, but easy to address, and it won’t matter a lick once we’re building our visualizations.

  11. Click Add again, select More, and select DR_QuestionHelper.tde.
  12. Click the second Venn overlapping circle and remove any of the joins that may be in there (most likely again using Number of Records).
  13. From the left data source select Question ID and from the right data source select QuestionID as shown below.
  14. Hide the fields RespID (Extract1), both Number of Records fields, the second Question ID, and Pivot Field Names, and rename Pivot Field Values to Value. Your screen should look like this.
  15. Indicate that you want to create an Extract (look for the radio button towards the upper right of the screen) and then click a sheet in your workbook to generate the Extract.
  16. When asked to save, name the file DR_JustSoAll.tde. After creating the extract Tableau will show all the field names grouped by data source, as shown below.
  17. Click the down arrow and select Group by Folder.
    Okay, you don’t have to do this but I see no reason to group the fields by data source.

Congratulations, you now have your data “just so” and you did it all in Tableau.

So, how do you get the extracts to refresh?

Good question.  If you are using either Tableau Server or Online you can create shared data sources and program the extract to refresh on a regular or as-needed basis.

If you are using Tableau Desktop — and you have all four data sources in one workbook — you can just click the Data menu and select Refresh All Extracts.

Conclusion

While I find the process outlined here both cumbersome and confusing (what is up with those renamed fields NOT staying renamed?) this approach does appear to work and you only need to set it up once. The same cannot be said of the Tableau add-in for Excel, which requires a lot of manual intervention every time you want to update the data.

Will this replace Alteryx as my tool of choice? No, but it does work and you can’t beat the price.

Oct 17, 2016
 

Overview

So, you’ve created a wonderful collection of survey data dashboards that have innumerable demographic filters so that users can, for example, just see responses from left-handed Los Angeles Lakers fans between the ages of 34 and 39.

So, what’s the problem?

Actually, there are two problems. The first occurs when extreme filtering reduces the number of responses so much that the results are statistically meaningless. The second is that you may inadvertently allow people to “glean” who has answered a survey.  For example, if you conduct a salary survey you want participants to be assured that nobody will be able to see individual responses. But if you have too many filters it may be possible to winnow down the results so you can guess who provided the answer.

Fortunately, it’s easy to set up a graceful way to suppress a chart and display an error message in its place when the “n” count gets too low.

How it works

Consider the dashboard shown in Figure 1.  Notice that the upper right corner shows that with nothing filtered there are a total of 350 responses.


Figure 1 — A simple dashboard showing results for a check-all-that-apply question.

Also notice there’s a parameter control that allows you to specify the cut-off point for displaying the visualization.

Now let’s see what happens if we use the filters to winnow down the number of responses to the point that there are fewer than 20 (Figure 2.)


Figure 2 — With too few responses the bar chart is suppressed and the warning message is displayed.

What’s happening here is that there are two floating charts, both with similar filters, that look at how many survey responses there are. The bar chart in Figure 1 is set to appear if the number of responses is greater than or equal to 20. The warning message (it’s just a Tableau worksheet) appears if the number of responses is fewer than 20.

How the filters work

Let’s look first at what drives the bar chart (Figure 3.)


Figure 3 — Pill settings and filters for the bar chart.

Notice in particular there is a field called [Minimum Count] that is on the Filters card and that it is set to True. The field [Minimum Count] is defined as follows:


Figure 4 — How [Minimum Count] is defined.

Here [Count Threshold] is the fill-in-the-blank parameter (currently set to 20).

So, the visualization will only appear if there are at least 20 responses; otherwise the filter “kills” the viz and the only thing we see is the title.
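The formula in Figure 4 appears only as an image, so here is a minimal sketch of what [Minimum Count] might look like, assuming a [Resp ID] field that uniquely identifies each respondent:

    // True when enough distinct respondents survive the demographic filters.
    // In a check-all-that-apply view every respondent contributes a row to
    // every bar, so this evaluates to the full respondent count.
    COUNTD([Resp ID]) >= [Count Threshold]

Because the result is boolean, placing the field on the Filters card and keeping True (or False, for the warning sheet) produces the show / hide behavior described here.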

Now, how does the secondary visualization work?  Let’s have a look (Figure 5.)


Figure 5 — Pill and filter settings for the “warning” visualization.

Notice that [Minimum Count] is also on the Filters card but is set to False. We’re seeing the viz (the red message) because the filters in place result in fewer than 20 responses.

And just what is producing the message? It’s the field [Too Few] that’s been placed on the Text button on the Marks card. The field is defined as follows.


Figure 6 — Definition of the field [Too Few].
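As with Figure 4, the exact definition lives in the image; a sketch of [Too Few] could be as simple as the following, where the wording of the message is just an example:

    // Returns the warning only when too few respondents remain;
    // otherwise returns NULL and nothing is displayed.
    IF COUNTD([Resp ID]) < [Count Threshold] THEN
        "Not enough responses. Please remove one or more filters."
    END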

Why use floating elements?

We certainly could cram the two visualizations into a container and make sure that when one is displayed the other only takes up a few pixels.  I elected to go with the floating approach but made sure that the secondary viz was set to be in back of the primary viz.

Is that all there is to it?

For a check-all-that-apply question, that’s all you need to know, but some question and visualization types may need different approaches.

Consider Figure 7 where we see a jitterplot comparing salary data for men and women where each dot represents a response from an individual survey participant.


Figure 7 — Pill and filter configuration for a jitterplot visualizing salary data.

Notice that we have a different field (one that uses a Table calculation) on the Filters card.

There’s also a very different setup to display the warning message for this visualization as we cannot simply base this on SUM([Number of Records]).

I will leave it to the reader to explore how these are set up.  Just let me know when you come up with a better approach.
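If you want a head start, one plausible shape for such a filter is a table calculation like the one sketched below; since each mark in the jitterplot is a single respondent, the count has to be computed across the whole table rather than per mark (this is a sketch, not necessarily what is in the downloadable workbook):

    // TOTAL() computes across the entire table, so this returns the
    // overall respondent count rather than a per-mark count of 1.
    TOTAL(COUNTD([Resp ID])) >= [Count Threshold]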

Conclusion

If you’re visualizing survey data and giving users filters you should come up with a game plan for what to do if there are too few responses. In this blog post (and the embedded, downloadable workbook) I present two approaches for two types of survey questions. Other questions types (for example, Likert-scale questions) will need some modifications to what I’ve presented here.

Sep 7, 2016
 

Overview

Imagine a terrific introductory college course presented by a terrific professor.

That’s the feeling I had in reading The Truthful Art, Alberto Cairo’s follow up to his first book The Functional Art.

Whereas his first book took a “look at what you can and should do” approach to help people see and understand data, The Truthful Art is more of a “here’s what you need to know” if you want to be a data journalist — and there’s a lot of things you need to know if you are going to do a proper job.

I’m reluctant to use the term “data journalism” as Cairo’s book is for anyone that is tasked with helping people make sense of data. The difference is that the data journalist’s work is likely to be public and yours may only be seen by people working in your organization. But while you may not have to make a dashboard that is as polished as an infographic from the New York Times, both you and the data journalist need to adhere to a particular doctrine and have sufficient skills across a wide range of topics if you are going to build functional, truthful, and meaningful visualizations.

First, Be Truthful

If the credo for doctors is to “first, do no harm” Cairo might argue that the credo for data journalists is to “first, be truthful.” Cairo makes the case that a good visualization must be

  • Truthful
  • Functional
  • Beautiful
  • Insightful
  • Enlightening

And it must be these things in this order of priority. That is, the visualization must first be “relevant, factual, and accurate” and only then should it be “accessible and engaging.” Cairo further states that “honesty, clarity, and depth come first.” Indeed, this is why he bristles with outrage over deceitful graphics like this one.

So, how, exactly, does one create something that is truthful, functional, beautiful, insightful, and enlightening?

By achieving a sufficient level of competence in a LOT of different areas.

And just what are those areas?

The Data Journalism Landscape

In reading The Truthful Art you may feel like you are in a helicopter several thousand feet above the data visualization landscape. In each section Cairo, as expert guide, will gently descend to several hundred feet above a particular area and allow you to examine varied topics including design, statistics, color, storytelling, psychology, and ethics. While the book never gets deep into any of these subjects Cairo does provide excellent resources for anyone interested in exploring a particular topic in depth as every chapter of the book ends with a section titled “To Learn More.”

While Cairo’s writing is disarmingly warm and engaging he takes the responsibility of data storytelling very seriously. By the end of the book you will have an excellent understanding of the investment needed to make a worthwhile contribution to your company, society, or both.

Conclusion

Whether you are new to the field or have been practicing for years, I’m confident you’ll find The Truthful Art, like its predecessor, to be fun, elucidating, and inspiring.

The Truthful Art

Paperback: 400 pages

Publisher: New Riders; 1 edition (February 28, 2016)

Aug 11, 2016
 

Overview

As readers of this blog know, I have my problems with donut charts.

That said, I acknowledge that they can be cool and, under certain circumstances, enormously useful.

On a recent flight I was struck by how much I liked the animated “estimated time to arrival” donut chart that appeared on my personal TV screen. An example of such a chart is shown in Figure 1.


Figure 1 — Donut chart showing progress towards completion of a flight.

I find this image very attractive and very easy to understand — I can see that I’m almost three-quarters of the way to my destination and that there are only 49 minutes left to the flight.

So, given how clear and cool this is, why not use them on a dashboard?  And if one is good, why not use lots of them?

It’s the “more than one” situation that may lead to problems.

Trying to make comparisons with donut charts

The flight status chart works because it shows one thing only: a single item’s progress towards a goal.

Let’s see what happens when we want to compare more than one item.

Consider the chart in Figure 2 that shows the placement rates for Fremontia Academy.


Figure 2 — Donut chart showing placement percentage.

A 95% placement percentage is really impressive.  Is that better than other institutions?  If so, how much better is it?

Figure 3 shows a comparison among three different institutions using three different donut charts.


Figure 3  — Three donut charts displaying placement percentages for three different institutions.

Before digging deeper let’s replace the three separate donuts with a donut-within-a-donut-within-a-donut chart (Figure 4.)


Figure 4  — A concentric donut chart (also called a “radial bar chart” or a “pie gauge.”)

“What’s the problem?” you may ask, “these comparisons are easy.”

While you may be able to make the comparisons you are in fact working considerably harder than you need to be.

Really.  Let me prove it to you.

Let’s suppose you wanted to compare the heights of three famous buildings: One World Trade Center, The Empire State Building, and The Chrysler Building (Figure 5).


Figure 5  — Comparing the size (in feet) of three large buildings.

Now that’s an easy comparison. With virtually no effort we can see that One World Trade Center (blue) is almost twice as tall as The Chrysler Building (red).
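(For reference: One World Trade Center stands 1,776 feet tall and The Chrysler Building 1,046 feet, a ratio of about 1.7 to 1.)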

Now let’s see how easy the comparison is with donuts (Figure 6.)


Figure 6  — Three large buildings twisted into semi-circles.

Here are the same buildings rendered using a concentric donut chart (Figure 7).


Figure 7  — Three skyscrapers spooning.

Yikes.

So, with this somewhat contrived but hopefully memorable example we took something that was simple to compare (the silhouettes of buildings) and contorted it into difficult-to-compare semi-circles.

With this in mind, let’s revisit the Placement example we saw in Figure 3.

Here is the same data rendered using a bar chart.


Figure 8 — Placement percentage comparison using a bar chart.

The comparison is much easier with the bars than with the donuts / semi-circles. You can tell with practically no effort that the blue bar is approximately twice as long as the red bar, even without looking at the numbers.

Indeed, that’s a really good test of how clear your visualization is: can you compare magnitude if the numbers are hidden?

Pop quiz — how much larger is the orange segment compared to the red segment?


Figure 9 — Trying to compare the length of donut segments is difficult.

Now try to answer the same question with a “boring” bar chart.


Figure 10 — Comparing the length of bars is easy.

With the circle segments you are squinting and guessing while with the bars you know immediately: the orange bar is twice as large as the red bar.

More downsides for donuts

In addition to comparisons being difficult, how would you handle a situation where you exceeded a goal?  For example, how do you show a salesperson beating his / her quota?  With a bar chart you can show the bar going beyond the goal line (Figure 11).


Figure 11 — With a bar chart it’s easy to show more than 100% of goal.

How do you show this with a donut chart?

Rhetorical question.  You can’t.

Conclusion

If you only have to show progress towards a single goal and don’t need to make a comparison then it’s fine to use a donut chart. If you need anything more complex you should use a bar chart as it will be much easier for you and your users to understand the data.

Special thanks to Eric Kim for creating the building images.

 

Jun 22, 2016
 

Overview

My obsession with finding the best way to visualize data will often infiltrate my dreams. In my slumbers I find myself dragging Tableau pills in an ongoing pursuit to come up with the ideal dashboard that shines light on whatever data set has invaded my psyche.

But is the pursuit of the perfect dashboard folly?

Probably, as I’ll explain in a minute, but I don’t want to suggest anyone not at least try for the clearest, most insightful and most enlightening way to display information.

Is this way the best way?

This pursuit of the ideal chart preoccupies a lot of people in the data visualization community. Consider this open discussion between Stephen Few and Cole Nussbaumer Knaflic that transpired earlier this year.

As you will read, Few weighs in on Knaflic’s book Storytelling with Data and her use of 100% stacked bar charts.  He cited this particular example.


Figure 1 — Knaflic’s 100% stacked bar

Few argued that there was a better approach and that would be to have a line chart with a separate line for each goal state.


Figure 2 — Few’s line chart

Having written about visualizing sentiment and proclivities, I chimed in suggesting that a divergent stacked bar chart would be better (see Figure 3.) I think this presents a clearer and more flexible approach, especially if you have more than three categories to compare as the 100% stacked bar chart and line chart can become difficult to read.


Figure 3 — My divergent stacked bar chart

The ongoing public discussion was engaging and congenial but I’ve seen similar cases where one or more of the parties advocating a solution become so certain that his / her approach is without a shadow of a doubt the only right way to present the data that tempers flare. Indeed, I’ve seen instances where some well-respected authors have declared a type of “Sharia Law” of data visualization and have banned so-called heretics and dilettantes from leaving comments on blogs and even following on Twitter!

My take? While I prefer the divergent stacked bar, the real question is whether the intended audience can see and understand the data. In this case, if management cannot tell from any of the three charts that there was a problem that started in Q3 2014 and continued for each quarter, then that company has some serious issues.

In other words, if the people that need to “get” it can in fact make comparisons, see what is important, and make good decisions on their new-found understanding of the data  — all without having to work unnecessarily hard to decode the chart — then you have succeeded.

I’m not saying don’t strive to be as efficient, clear, and engaging as possible, it’s just that the goal shouldn’t be to make the perfect chart; it should be to inform and enlighten.

And in this case I think all three approaches will more than suffice.  So stop arguing.

Understanding and educating your audience

Earlier this year I got a big kick out of something that Alberto Cairo retweeted:


Figure 4 — Avoid Xenographphobia: The fear of unusual graphics / foreign chart types.

Xenographphobia! What a wonderful neologism meaning “fear of unusual graphics.”

So, why do I bring this up? While it’s critical to know your audience and not overwhelm them with unnecessary complexity, you should not be afraid to educate them as well. I’ve heard far too often people proclaim “oh, our executive team will never understand that chart.”

Really? Is the chart so complex or the executive so close-minded that they won’t invest a little bit of time getting up to speed with an approach that may be new, but very worthwhile?

I remember the first time I saw a bullet chart (a Stephen Few creation) and thought “what is this nonsense?”  It turns out it wasn’t, and isn’t, nonsense.  It took all of 60 seconds for somebody to explain how the chart worked and I immediately saw how valuable it was.


Figure 5 — A bullet chart, explained.

I had a similar reaction when I first heard about jump plots from Tom VanBuskirk and Chris DeMartini. My thoughts at the time were “oooh… curvy lines.  I love curvy lines! But I suspect this is a case where the chart is too much decoration and not enough information. I bet there are better, simpler ways to present the data.”


Figure 6 — Jump plot example. Yes, these are very decorative, but they are also wickedly informative.

Then I spent some time looking into the use cases and came to the conclusion that for those particular situations jump plots and jump lines worked really well.

That said, there are some novel charts that I don’t think I will ever endorse, with the pie gauge being at the top of my list.


Figure 7 — The pie gauge, aka, a donut chart within a donut chart, aka, stacked donut chart. I won’t go into the use case here but a bullet chart is a much better choice.

So, what should we do?

I’ve argued that you should always try to make it as easy as possible for people to understand the data but you should not go crazy trying to make the “perfect dashboard.”

I also argue that while you should understand the skillset and mindset of your audience, you should not be afraid to educate them on new chart types, especially if it’s a “learn once, use over and over” type of situation.

But what about aesthetics, engagement, and interactivity? What roles do these play?  Is there a set of guidelines or framework we should follow in crafting visualizations?

Alberto Cairo, in his book The Truthful Art, suggests such a framework based on five key qualities.

I plan to write about these qualities (and the book) soon.

May 10, 2016
 

Note: Special thanks to Joe Mako who came up with a considerably more elegant and extensible solution than I had first built. I describe Joe’s approach in this post.

Overview

I’m fortunate that when I have to work with survey data that is not well-coded I can take out my trusty copy of Alteryx Designer and get my data “just so.”

But what if you don’t have a copy of Alteryx? Or what if you want to use Qualtrics’ new Web Data Connector and don’t have the ability to insert a data prep tool within your workflow?

In this blog post we’ll see how you can get Tableau to address the problem.  Specifically, we’ll see how to work with check-all-that-apply questions where the data has been coded as 1s and blanks instead of 1s and 0s.

Note: While the example shown focuses on Qualtrics data the approach will work with any tool that leaves blanks for respondents that did not select an item (e.g., Survey Gizmo).

So, What’s Wrong with Blanks?

Consider the data snippet below that shows responses to a group of check-all-that-apply questions. This is what I consider to be badly-coded data.


Figure 1 — Check all that apply questions as they might appear in a CSV file downloaded from Qualtrics

Now here’s how I would prefer the data to be coded.


Figure 2 — Correctly-coded check all that apply questions

So, why is this better?

By having 1s and 0s we can see, easily, if some respondents didn’t answer the question group at all. In the figure below we can see that respondents 1002, 1006, 1011, and 1016 did not make any selection for any of the check-all-that-apply questions.  More importantly, with data coded with 1s and 0s, we can just filter out the Nulls in Tableau and we’ll get accurate results.



Figure 3 — Respondents that didn’t answer this question group are highlighted in yellow

IMPORTANT: Before going any further, make note that respondents 1002, 1006, 1011, and 1016 did not make any selections and that of the 13 people that actually responded to the question group, 12 selected option Q1_4.  There will be a test.

So, given our constraints, how are we going to come up with a way to distinguish between a blank that means a respondent didn’t select a particular option and a blank that, when combined with three other blanks, means the person didn’t respond to the group of questions at all?

Before we plow into this I want to make sure people realize that we will soon pivot / reshape the data so that it is tall. Specifically, instead of dealing with this…


Figure 4 — Data before reshaping (as you would get it if you downloaded a .CSV file.)

… we’re going to be working with this:


Figure 5 — Reshaped data using Tableau’s built-in Pivoting feature. If you’re using the Qualtrics web data connector you don’t need to use this.

Once we pivot the data we’ll see the following collection of dimensions and measures in Tableau.


Figure 6 — Reshaped data as it appears in Tableau

Note: For this example I’m not going to the trouble of creating a data set that has the survey responses in both numeric and text format. The technique we explore will work equally well for when you have both as when you just have numeric values.

Grouping the related questions

To be able to come up with an elegant, extensible way to tell Tableau “don’t take into account questions where people didn’t make any selections” we’re going to need a way to group related questions together.

Note that the technique I’m about to propose is different from the ad-hoc grouping approach I describe in Using Qualtrics’ Web Data Connector and Reshaping Survey Data with Tableau 9.0.  The programmatic “group” we create here needs to be referenced in a calculated field, and you can’t do that with an ad-hoc group.

Here we’re going to take advantage of Qualtrics’ Question ID naming convention and how related questions all look like this:

Q(N)_

That is, a group of related check-all-that-apply questions might be named like this:

Q1_1
Q1_2
…
Q1_8

And a group of related Likert scale questions might be named like this:

Q7_1
Q7_2
…
Q7_13

Given this standardization, we need a mechanism that will group everything to the left of the underscore. There are several ways we can achieve this, but for this example we’ll create a new field called “Question Prefix” and define it as follows:


Figure 7 — Calculated field that will group related questions based on whatever comes before the underscore in the Question ID field.
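The definition in Figure 7 is shown as an image; one plausible way to write it uses Tableau’s SPLIT() function (LEFT() combined with FIND() would work just as well):

    // Everything to the left of the first underscore,
    // e.g., "Q1_4" becomes "Q1" and "Q7_13" becomes "Q7".
    SPLIT([Question ID], "_", 1)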

Now let’s use this field and the Resp ID field to see who did not respond at all to a particular group of questions.


Figure 8 — This collection of fields show us who did not respond at all to a particular group of questions.

So, have a look at the results in the image above where we place MAX(Value) on the text button. What’s happening is that for each respondent we’re looking at the answers to Q1_1, Q1_2, Q1_3, and Q1_4 and taking the maximum value across all those questions.  Notice that there are four blanks / Nulls for 1002, 1006, 1011, and 1016 — these are the four respondents that didn’t answer the question group at all (see, I told you there would be a test.)

We do NOT want to include these people when we calculate the percentage of people that selected a particular choice, so we can place MAX(Value) on the Filters shelf and indicate that we should not include Null values, as shown below.


Figure 9 — Filter that will exclude Null values.

Applying this filter will show us responses from people that did NOT skip the question, as shown below.


Figure 10 — Respondents that selected at least one option.

Okay, this is great, but the visualization we want to create shouldn’t look anything like this. Indeed, we have too many things on the level of detail at this point.

So what are we going to do?

Level-of-Detail Calculations to the Rescue

Anytime you think “I need Tableau to consider some dimensions when making a calculation, but those dimensions can’t be in the visualization” a giant alarm bell should go off in your head — this is precisely the use case for a LoD calculation!

Here’s the LoD calc that, when placed on the Filters shelf, will do the trick:


Figure 11 — LoD calc that allows us to exclude folks that skipped answering a question group.

A translation into English would be:

Even though [Question Prefix] and [Resp ID] may not be on the viz detail, take the combination of these two fields and find the MAX([Value]). Then find only the Null values from this and exclude them.

Note that we are using FIXED instead of INCLUDE as the former will automatically create a dimension and will work no matter what other filter settings may be in place.
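Translating that back into a formula, an LoD calc matching the description would look something like this:

    // For each combination of question group and respondent, find the
    // highest answer value. If even the maximum is NULL, the respondent
    // skipped the entire group; keeping only True filters them out.
    NOT ISNULL({FIXED [Question Prefix], [Resp ID] : MAX([Value])})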

Putting This All Together

First we need to create a calculation that gives us the percentage of people that selected an option from the group of check-all-that-apply questions. For this instance the following calculation will work well.


Figure 12 — How to determine the percentage of people that selected an option.

A quick translation into English would be:

Add up everybody that answered “1” (meaning they selected an option) and divide by the number of people that answered the question.
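That translation corresponds to a calculation along these lines, and it squares with the numbers in the figures that follow (12 / 17 ≈ 71% before the LoD filter is applied, 12 / 13 ≈ 92% after):

    // SUM([Value]) adds up the 1s (NULLs are ignored) while
    // SUM([Number of Records]) counts everyone still in the view.
    SUM([Value]) / SUM([Number of Records])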

Let’s see if this works. Consider the visualization shown below.


Figure 13 — First attempt at showing the percentage of respondents that selected an option. This is NOT showing the correct percentage.

Does this pass the “smell” test? Look at Q1_4. You may recall that we saw that 12 out of 13 people selected this option — but 12 out of 13 is 92%, not 71%.  Indeed, if we place SUM([Number of Records]) as a discrete measure on the Rows shelf we see that Tableau believes 17 people responded to the question.


Figure 14 — Without a filter Tableau believes 17 people responded to this group of questions

We need to put our LoD calc on the Filters shelf and set it to True.


Figure 15 — Removing “All of these are Null” from the respondent pool gives us the correct count of 13 and the correct percentage (12 out of 13).

Conclusion

How check-all-that-apply questions are coded is a problem with many survey tools including Qualtrics and Survey Gizmo. If you have a good data preparation tool like Alteryx you can fix the problem, but if you don’t have such a tool or don’t want to insert something into your workflow then you can get Tableau to address the problem with a Level-of-Detail calculation.

Apr 11, 2016
 

Overview

This past week I enjoyed looking at and interacting with Matt Chambers’ car color popularity bump chart.


Figure 1 — Matt Chambers’ car color popularity bump chart.  You can find the original Datagraver visualization upon which this was based here.

The key to this dashboard is interactivity as it’s hard to parse all the car colors at once. If you hover over one at a time it’s easy to follow the trends, as shown here.


Figure 2 — Hovering over a color shows you that color’s ranking over time

Showing Rank Only

Over the past few months I’ve seen a lot of people making bump charts (myself included). As much as I like them I fear that people are leaving some critical insights out of the discussion as bump charts only show ordinal information and not cardinal information. That is, they show rank but not magnitude.

Consider the bump chart above.  In 2009 White was the number one color, Black was number two, and Red was a distant sixth.


Figure 3 — Red appears to be a distant sixth

But was Red in fact “distant” or was its popularity closer than it would appear?  When you just show rank there’s no easy way to tell.

Showing Rank and Magnitude

Consider the dashboard below that shows the overall ranking and percentage popularity for car colors over the last ten years.


Figure 4 — Ranked Bar Chart dashboard with no colors selected

Right now we can see that over the last ten years White came in first place with 22% and Red came in fifth place with 11%.  Now let’s see what happens if we select Red and White, as shown below.


Figure 5 — Comparing the popularity of white and red cars over the last ten years.

Here we can see everything that the bump chart had plus so much more. Specifically, we can see that White was in first place for the past ten years and that Red was as high as fourth place in 2007 and as low as sixth place in 2008 and 2009. But we can also see that in 2009 White was only 50% larger than Red while in 2015 it was almost 150% larger!

Try it yourself

Click here to interact with the color popularity ranked bar chart.

Ranked Bars are Versatile

The ranked bar approach works well showing rank and magnitude over time and across different categories.

Consider the dashboard below that shows the sales for the top 20 products overall and then a ranked breakdown by one of three possible categories (Customer Segment, Region, and Year).


Figure 6 — Overall sales / rank and sales / rank broken down by Customer Segment.

Here we can see not only how the Bosch Full Integrated Dishwasher is ranked overall and within the four Customer Segments, but we can also see how much more and less the other products’ sales were.

Here’s the same dashboard showing a breakdown by Region.


Figure 7 — Overall sales / rank and sales / rank broken down by Region.

The Bosch Dishwasher is fifth overall but it isn’t even in the Top 20 in the South.  We can also see that it is second in the East, ever-so-slightly behind the first-ranked product (the Whirlpool Upright Freezer). You can see for yourself when you interact with the dashboard that’s at the end of the post.

Here’s the same data but presented using a bump chart.


Figure 8 — Overall sales / rank and just rank by Region.

The bump chart looks cool but we only get part of the story as I can only glean rank.

Conclusion

The bump chart is a great choice if you want to show “soft” rankings, such as what place a team came in over time, but if you want to show rank and magnitude, consider the ranked bar chart instead.

Note: for step-by-step instructions on how to build a dashboard like the one below, see Visual Ranking within a Category.

The Ranked Bar Dashboard — Kick The Tires

Mar 30, 2016
 

Some thoughts on functionality, beauty, crown molding, and lollipop charts

Overview

I’ve been writing a book about business dashboards with Jeffrey Shaffer and Andy Cotgreave and we’ve conducted screen-sharing sessions with dozens of people and reviewed scores of dashboards. We had a particularly enjoyable jam session with Tableau Zen Master Mark Jackson last week. When we asked him why he had done something in particular he replied with a comment that has been haunting me (in a good way) ever since:

“I look at this dashboard first thing every morning. I want to look at something beautiful.”

This really resonated with me. Mark was not tasked with making a public-facing dashboard that had to compete with USA Today infographics. He just wanted to make something that was both functional and beautiful. It made me think of waking up in a lovely room with crown molding. You don’t need crown molding, but as long as it isn’t blocking sunlight or clashing with the decor it’s certainly delightful to have crown molding.

This got me thinking about a topic I come back to often — how to make visualizations that are both functional and beautiful.

Unfortunately, this isn’t so easy and often leads to people sacrificing clarity for the sake of coolitude (see “Balancing Accuracy, Engagement, and Tone” and “It’s Your Data, not the Viz, That’s Boring” for some more thoughts on the matter).  I did, however, want to share a case study that had a delightful outcome and that employed a chart type that combines the accuracy of a bar chart with a bit of the “oooh” from packed bubbles and “ahhh” from donut charts.

Marist Poll and Views of the 2016 Presidential Election

Marist Poll is one of my clients and they are tasked with providing nationwide survey results to The Wall Street Journal and NBC News.  In November 2015 they conducted a poll asking people to describe in one word the tone of the 2016 presidential election. Here were the results.


Figure 1 — Marist Poll results in tabular form

Attempt One — Word Cloud

The results from the poll are very compelling, but as depicted in the text table they don’t exactly pop.

The client tried a word cloud as shown below.


Figure 2 — Marist Poll results using a word cloud

I’ll admit that the graphic “pops” but it’s hard to make sense of the six terms let alone discern that the results for “Crazy” were almost three times greater than the next most popular term.

Attempt Two — Packed Bubbles

People love circles and this chart certainly looks “cool” but what does it tell us other than that the “Crazy” circle is larger than the other circles?


Figure 3 — Marist Poll results using packed bubbles

Why not use a simple bar chart?

Attempt Three — A Simple Bar Chart

Here are the same results rendered using the chart type Tableau’s “Show Me” suggests you use when working with this type of data.


Figure 4 — Marist Poll results using a bar chart

This is a big improvement over the word cloud and packed bubbles with respect to clarity — you can easily sort the responses and see how much larger “Crazy” is than the other responses.

But the chart is a bit sterile. What can we do to make the “Crazy” pop out without distorting the information?

Attempt Four — A Colored Bar Chart

The major takeaway from the poll is that 40% of the respondents characterized the election as “Crazy.” We can make that easier to glean by making that bar a bold color and all the other bars muted, as shown here.


Figure 5 — Marist Poll results using a bar chart with one bar colored differently

I’ll confess that this does the trick for me, but the client wanted to pursue some other options so we looked into a lollipop chart.

Attempt Five — Lollipop Chart

The lollipop chart is not native to Tableau;  it’s simply a dual axis chart that superimposes a circle chart on top of a bar chart that has very thin bars.


Figure 6 — Marist Poll results using a lollipop chart

This strikes me as an excellent compromise between the analytical integrity of the bar chart and the “ooh… circles” appeal of the packed bubbles.  I have no qualms about using this chart type.

But there’s still something missing if we want the chart to have some impact.

Final Attempt — Adding a Compelling Title

A concise, descriptive title can make a huge difference in garnering attention and making a chart more memorable. In the example below the client added some graphic design artistry to the typography to make the title compelling.


Figure 7 — Marist Poll results using a lollipop chart with compelling headline.  I love this.

Conclusion

My bass-playing friends will probably agree that “groove” is more important than “chops.”  That is, being able to play “in the pocket” with a rock-steady beat is more important than being able to play a great solo with a flurry of notes all over the neck.

But it sure is great to be able to do both.

The same goes for data visualization. Functionality needs to come first, then beauty.

But it sure is great to have both.

And in many cases, with a little extra effort, you can have both.

So go ahead, try putting some “crown molding” into your data visualizations and delight yourself and your stakeholders.

 

Mar 17, 2016
 

Overview

I’m a big fan of Andy Kriebel’s and Andy Cotgreave’s Makeover Monday challenge. For those of you not familiar with this, each week Kriebel and Cotgreave find an existing visualization / data set and ask the data visualization community to come up with alternative ways to present the same data.

As Cotgreave points out in one of his blog posts “It’s about using a tool to debate data. It’s about improving people’s data literacy.”

With one major exception that I’ll discuss in a moment the challenge is meeting its goals as each week several dozen people participate and the submissions and accompanying discussions have been enormously valuable.

But there was one week where the community failed.

Worse than that, the community did some damage that will be difficult to repair.

Bad Data Make Bad Vizzes Make Bogus Conclusions

Week four of the Makeover Monday challenge used survey data from GOBankingRates that posed the question “how much money do you have saved in your savings account?” Here are some of the baseless conclusions from people that participated in the makeover:


Figure 1 — From the source article that spawned the makeover.  Yes, the exploding donut needs a makeover, but it’s the headline “Survey finds that two-thirds of Americans don’t have enough money saved” that presents the bigger problem.

  • Americans Don’t Have Enough Money Saved (See link).
  • 71% of Americans Have Less than $1,000 in Savings. Yikes! (See link).
  • Americans Just Aren’t Saving Money (See link).
  • Most Americans Have Miniscule Savings (See link).
  • 80% of Americans Have Less than $5,000 in Savings! (See link).
  • Americans Are Not Saving Enough Money! (See link).
  • Americans Have Too Little Savings (See link).

So, what’s the problem?

It turns out the key finding from the original publication is not true — and thanks to the makeovers that spurious finding has been amplified dozens of times.

How did this happen?

Let’s dig into the data a little bit.

Is There a Relationship Between Age and Savings?

As I mentioned before I think the Monday Makeover Challenge is great and I’ve participated in a couple of them. I started to work on this one and took a stab at showing the relationship between age and savings, as shown here.


Figure 2 — Divergent stacked bar chart showing the percentage of people that have different savings amounts, sorted by age

This looked odd to me as I expected to see a correlation between age and savings; that is, I expected to see a lot more blue among Seniors and Baby Boomers.

I decided to make the demarcations less granular and just compare people with minimal savings and those with $1,000 or more in savings, as shown here.

Figure 3 — Less granular divergent stacked bar chart

This result seemed way off, so either my supposition is wildly incorrect (i.e., as people get older they save more) or there was something wrong with the data.

Note: I try to remind people that Tableau isn’t just for reporting interesting findings. It’s a remarkably useful tool for finding flaws in the data.

It turns out that while there is indeed something wrong with the data, there was a much bigger problem:

Most people didn’t bother to look at the actual question the survey asked.

What the Survey Asked

The survey asked “How much money do you have saved in your savings account?”  It did not ask “How much money do you have saved?”

The difference is titanic as the average American savings account yields but 0.06 percent interest!  That’s infinitesimal — you might as well stick your money in a mattress!
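At that rate a $10,000 balance earns all of $6 a year (10,000 × 0.0006 = $6).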

Indeed, I am of the Baby Boomer generation and I have but $20 in my savings account — but (thankfully) more in savings overall.

So, the vast majority of people that participated in the makeover didn’t bother to look at the actual question and came to — and published — a bogus conclusion.

Were there any other problems with the survey?

You betcha.

What’s Wrong with the Survey?

A visualization is only as good as its underlying data and the data in question has nothing to do with the savings habits of Americans; it only has to do with having a savings account.

But there are other shortcomings with the survey that should make us question whether the data is even useful for analyzing how much money Americans have sitting in a savings account.

Consider this excellent review of the same Makeover Monday challenge from Christophe Cariou.  He points out the following shortcomings with the survey itself:

  • In the article, we read: ‘The responses are representative of the U.S. internet population’. It is therefore not representative of the US population. See this report by Pew Research Center for age and online access.
  • We also read ‘Demographic information was not available for all respondents, and analysis of responses by demographics is based solely on responses for which the targeted demographic information was available.’ Normally, if it was demographically representative, this would be clarified. This comment adds a doubt.
  • The average savings amount in the article is the sum of the averages of the groups divided by 6. It is not weighted by the size of each group.
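To see why that last point matters, consider a hypothetical example: one group of 1,000 respondents averaging $500 in savings and another group of 100 respondents averaging $5,000. The unweighted mean of the group averages is ($500 + $5,000) / 2 = $2,750, while the properly weighted average is (1,000 × $500 + 100 × $5,000) / 1,100 ≈ $909, roughly a third as much.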

Note: Kudos to Bridget Cogley who also saw the problems with the conclusions when the makeovers first appeared in late January 2016.

Further note: In a subsequent makeover challenge blog post Cotgreave alerted participants to questionable data.

So, Where Exactly is the Harm?

So, dozens of people have created visualizations based on bad data and came up with bogus conclusions. Given the number of articles from allegedly reliable sources reporting shortcomings in savings, what’s the harm of sounding an alarm bell?

I suppose if you are an “ends justify the means” type of person then it’s fine to publish bogus findings as long as they change behavior in a positive way.

But I know many of the people in this community and they would be aghast at using data visualization this way.

I also fear that with collective missteps like this people will question the ability of makeover participants to relay accurate information.

So What Should We Do?

Andy Cotgreave and Andy Kriebel have earned their leadership positions in the data visualization community, so I hope they will make note of this makeover mishap and encourage people that published the bogus result to modify their headlines.

I also strongly encourage anyone working in data visualization to understand the data — warts and all — before rushing to publish. Andy Kriebel is providing the data set and we shouldn’t ask him to find all the flaws in it.  Indeed, that’s part of our job.

Finally, I ask others in the community to be more diligent: only publish work that has been carefully vetted and do not tolerate unsubstantiated work from others.

While it’s true that nothing terrible will happen if more Americans open savings accounts, there may be other situations where publishing spurious conclusions will do some serious damage.

Feb 11, 2016
 

Overview

As I’ve mentioned before, the number one impediment to success with visualizing survey data in Tableau is getting the data in the right format so you can do all the dazzling things you want to do.

Up until very recently none of the survey tool vendors had done anything to make things easy, so you had to pursue one of the following options:

  • Use the Tableau add-in for Microsoft Excel (only works on a PC and can be very cumbersome if you have data that needs to be updated.)
  • Use the pivot feature in Tableau 9.x (works great but is missing some important functionality, especially for larger surveys.)
  • Use Alteryx (Best-in-class but it has its own learning curve and is pricey.)
  • Get somebody to do this for you (like that’s going to happen.)

But now there is another choice and it warrants its own blog post.

Say hello to the Qualtrics web connector

Survey vendor Qualtrics has developed a web data connector so that you can connect directly to your Qualtrics surveys from inside Tableau. While the connector does not have everything I would want (more on this in a moment) this offering presents a great step forward for Qualtrics users that want to better see and understand their data using Tableau.

In this blog post we’ll look at the following:

  • Wide vs. tall data (and why we need it to be tall)
  • How to connect to Qualtrics survey data using Tableau
  • How you can get the data “just so” using data blending
  • Dealing with null values
  • Benefits and possible drawbacks

Wide vs. Tall Data

Most survey tools present the data in a one-column-for-each-question format that looks something like this.


Figure 1 — One row per respondent, one column for each question

We will be absolutely miserable working with the data in this format so we need something that can reshape the data so that we have a separate row for each question, as shown below.


Figure 2 — Each respondent is listed many times (one row for each question)

Actually, we want to do more than just reshape the data.  Ideally we want to get something that looks like this.

Figure 3 — Ideal format for survey data

Qualtrics can get us most of the way there, and with some simple data blending it can get us all of the way there.
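If it helps to see this reshape outside of Tableau, below is a minimal pandas sketch of the same wide-to-tall pivot. The respondent fields, question IDs, and values are made up for illustration; your survey tool will supply its own.

```python
import pandas as pd

# Hypothetical wide data: one row per respondent, one column per question
# (the shape shown in Figure 1).
wide = pd.DataFrame({
    "Resp ID": [1001, 1002],
    "Gender":  ["Female", "Male"],
    "Q1":      [5, 3],
    "Q2":      [2, 4],
})

# melt() keeps the respondent-level fields as identifiers and turns every
# question column into a (Question ID, Value) pair, producing one row per
# respondent per question (the shape shown in Figure 2).
tall = wide.melt(
    id_vars=["Resp ID", "Gender"],
    var_name="Question ID",
    value_name="Value",
)
print(tall)
```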

How to connect to Qualtrics survey data

Let’s assume you have a Qualtrics ID and at least one survey with some data.

  1. Log into Qualtrics.com and access the survey to which you will want to connect.
  2. Click the Responses tab and click the Export Data button, as shown here.
    04_InsideQ
  3. When the Export Responses dialog box appears, click Tableau, then copy the URL, as shown below.
    05_QDialog
  4. Close the dialog box and open Tableau.
  5. From within Tableau indicate you want to connect to a new data source and select Web Data Connector.
    06_WebConnector
  6. When the Web Data Connector dialog box appears paste the survey URL that you copied from Qualtrics.
  7. When asked, enter your Qualtrics username and password and click Log In.
    Qualtrics will then ask you which fields you want from your survey and which fields you want to transpose.
  8. Start by indicating that you want to select all fields and transpose all the fields, as shown below.
    07_Which fields
  9. Indicate which fields you don’t need and which fields you don’t need transposed. In this example we want Response ID, Q1: Gender, Q2: Region, and Q3: Age to be separate columns, but all other selected fields need to be transposed (pivoted).
    08_JustThese
  10. Click Import Survey Responses and wait a little bit as Tableau imports the data and creates a Tableau Data Extract file.

So, just what do we have here?

After specifying that we want the vast majority of questions (fields) to be transposed, here’s what we’ll see from inside Tableau.

09_Fields In Tableau

Figure 4 — Dimensions and Measures generated by the Qualtrics Web Data Connector

It may not look like much but the fact that we get both text and numeric responses to the survey questions will be a huge time saver.

Here’s what we get if we place Question ID, Question Description, Textual Answer, and Numeric Answer on the Rows shelf.  Please note that Numeric Answer has been converted to discrete so we can easily see how the text responses line up with the numeric responses.

10_Whatdowehavehere

Figure 5 — “What do we have here?” viz that shows Question ID, human-readable translation of the Question ID, all possible text responses, and all possible numeric responses.

Here I can see how each Question ID maps to a human-readable translation of the Question ID (i.e., the Question Description) as well as all possible text responses and all possible numeric responses.  Yes, there are some null values here, but we’ll deal with them in a moment.

I wish I could express just how beautiful this is! This has almost everything I could possibly want and I barely had to lift a finger.

Okay, what did I mean by “almost”?

This particular survey does not contain many questions so I could work with what I have right now without too many problems.  But if I had a survey with hundreds of questions then I would benefit greatly from having some way to group related questions together.

Fortunately I can do this easily using Tableau’s blending feature.

How you can get the data “just so” using data blending

I need to create a separate Excel file that groups the Question IDs so that I can more easily find and manipulate the data in Tableau.

11_Helpfer File

Figure 6 — Example of a “Question Helper” file.

Note:  In some cases it isn’t just convenience that warrants creating this “helper” file.  NPS analysis and Importance vs. Satisfaction analysis will be very difficult without building a scaffolding file like the one shown above.
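To make the idea concrete, here is a minimal pandas sketch of what the helper file contributes. The Question IDs, groupings, and question types below are hypothetical; the point is the one-row-per-question lookup table and the join on Question ID.

```python
import pandas as pd

# Hypothetical tall responses, as the web data connector delivers them.
responses = pd.DataFrame({
    "Resp ID":     [1001, 1001, 1002, 1002],
    "Question ID": ["Q4_1", "Q5", "Q4_1", "Q5"],
    "Value":       [1, 4, 0, 5],
})

# The "helper" file: one row per Question ID, mapping each question to a
# human-readable grouping and a question type.
helper = pd.DataFrame({
    "Question ID": ["Q4_1", "Q5"],
    "Grouping":    ["What do you measure", "Satisfaction"],
    "Qtype":       ["Check all that apply", "Likert"],
})

# A left join on Question ID attaches a grouping and question type to every
# response row; the blend in Tableau accomplishes the same mapping.
enriched = responses.merge(helper, on="Question ID", how="left")
print(enriched)
```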

Let’s see how to create the blend.

  1. On a new worksheet, drag Question ID onto the Rows shelf.
  2. Click the New Data Source icon.
  3. Connect to the Helper File and indicate whether you want to create an Extract (of course you do!).
    Notice that there is a link field on Question ID in what will be the secondary data source.
    12_LinkField
  4. Drag Grouping to the right of Question ID and then Qtype to the right of Grouping, as shown below.
    13_TempBlend
  5. Right-click the Grouping pill and select Create Primary Group.
  6. Rename the field Question Grouping as shown below.
    14_QuesitonGrouping
  7. Right-click the Qtype pill and select Create Primary Group.
  8. Rename the field Qtype and click OK.
    At this point we no longer need the secondary data source as we’ve created some very useful groups in the primary data source.
  9. Remove the Grouping and Qtype pills from the Rows shelf.
  10. Close the secondary data source.
  11. Drag Question Grouping and Qtype to the left of Question ID, as shown below.
    15_PerfectMapping

Putting this to use

Let’s see how to put all this to good use; specifically, let’s see how to visualize a check-all-that-apply question.

  1. In a new sheet drag Question Grouping to the Filters shelf and select What do you measure, as shown below.
    16_Filters
  2. Drag Question Description to rows.
  3. Create a new calculated field called CheckAll_Percent and define it as follows (a sketch of the same logic appears after these steps):
    17_CheckAll_Percent
  4. Right-click the newly created measure and indicate you want to change its default numeric format to Percentage with 0 decimal places.
  5. Drag CheckAll_Percent to the Columns shelf, turn on mark labels, and sort in descending order.

    Figure 8 — Percentage of respondents that measure Pulse Rate, Adrenaline Production, and so on.

Dealing with nulls

As we saw earlier we have some nulls and we need to make sure they don’t muck up the visualization.

As of this writing the Qualtrics Web Connector writes a 0 as the numeric answer and a blank as the textual answer for nulls. I would have expected either a “99” or an actual NULL value, since we cannot simply filter out the 0 values: sometimes a 0 means “no,” sometimes it means “strongly disagree,” and so on.

Fortunately, the Qualtrics Web Connector does write an unambiguous blank that we can use as our filter.

Let’s see how to remove the null values in the visualization above.

  1. Drag Textual Answer to the Filters shelf.
  2. When the Filter dialog box appears indicate that you want to see Fewer values, as shown below.
    19_Fewer Values
  3. Indicate that you want to Exclude blanks, as shown below.
    20_Exclude
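If you want to verify the filter logic outside of Tableau, here is a minimal pandas sketch, assuming the connector writes an empty string as the Textual Answer when a respondent never answered the question.

```python
import pandas as pd

# Hypothetical rows: the second respondent skipped the question, so the
# connector wrote a blank textual answer and a 0 numeric answer.
tall = pd.DataFrame({
    "Resp ID":        [1001, 1002],
    "Question ID":    ["Q5", "Q5"],
    "Textual Answer": ["Agree", ""],
    "Numeric Answer": [4, 0],
})

# Keep only rows with a real answer; this has the same effect as excluding
# blanks on the Filters shelf.
answered = tall[tall["Textual Answer"] != ""]
print(answered)
```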

Benefits and possible drawbacks

So, just what are the benefits and potential drawbacks of using the Qualtrics web connector?

Benefits

There are two clear benefits to using the Qualtrics web connector:

  1. You can get the data the way you need it with very little effort – there’s practically no learning curve.
  2. As you get more survey responses you just need to update the Extract. The workflow is seamless.

Possible drawbacks

  1. What happens when there are many questions? It’s unclear how well the connector will scale to surveys with hundreds of questions, let alone many survey responses. Navigating the connection dialog to find the fields you do and don’t want to transpose may prove cumbersome.
  2. Weighted data. For those who need it, there needs to be some mechanism in the workflow to weight survey responses (a sketch of the arithmetic follows this list). This is something that can be automated using a tool such as Alteryx.
  3. The Qualtrics web connector is not free and the pricing will vary based on a variety of factors. Given this, there may be cases where it makes sense to sacrifice Qualtrics’ ease and seamlessness for a more powerful and flexible ETL solution.
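On the weighting point, the arithmetic itself is simple even though the connector does not provide it. Here is a minimal pandas sketch with a hypothetical per-respondent Weight column; a weighted percentage counts each respondent by his or her weight instead of by 1.

```python
import pandas as pd

# Hypothetical check-all responses with a per-respondent survey weight.
df = pd.DataFrame({
    "Resp ID":        [1001, 1002, 1003],
    "Numeric Answer": [1, 0, 1],         # 1 = checked, 0 = not checked
    "Weight":         [0.8, 1.2, 1.0],   # hypothetical weights
})

# Unweighted: the share of respondents that checked the option.
unweighted = df["Numeric Answer"].sum() / len(df)

# Weighted: each respondent contributes his or her weight rather than 1.
weighted = (df["Numeric Answer"] * df["Weight"]).sum() / df["Weight"].sum()

print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")
```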

Conclusion

If you are a Qualtrics user who wants to explore and visualize data using Tableau, you should definitely take the Qualtrics web connector for a test drive, as Qualtrics has made the task of getting survey data into Tableau easy and seamless.

If you’re using a different survey tool and want to visualize your data with Tableau, you may want to consider switching to Qualtrics.
