December 3, 2012

We begin a new feature this month where I look at some recently-published data visualizations and offer suggestions on how they can be improved.

I’ll start with the MASIE Center’s Mobile Pulse Survey results.  For those of you who don’t know, the MASIE Center is a Saratoga Springs, NY, think tank focused on how organizations can support learning and knowledge within the workforce.  They do great work.

So, why focus on this report?  There are three reasons:

  1. The subject is survey data, and as readers of this blog know, I’ve done a lot of work in this area (see
  2. The subject matter, mobile learning, is near and dear to me after my stint with the eLearning Guild.
  3. There’s good stuff in the data, but the published visualizations make it difficult to understand what the data is trying to say.

Ah, Likert-Scale Questions

Consider this chart below that attempts to describe the results to the question “Current level of interest in providing the following learning elements on mobile devices.”

This chart is a tough read. With the exception of the fourth item, “Access to the Web,” which clearly has a really big “Strong Interest” bar, it’s very difficult to determine which of the ten items are high on respondents’ lists and which are low.

Consider instead how easy the chart below is to “grok” when we superimpose an average Likert score atop a divergent stacked bar chart.

With this rendering it’s very easy to see that “Access to Corporate Databases and Intranets” is only slightly behind “Access to the Web”.  It’s also trivial to sort the ten items by respondent sentiment.
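The superimposed average is just a weighted mean of the responses. Here’s a minimal sketch in Python; the function name and the response counts are hypothetical (not the survey’s actual numbers), with 1 = No Interest through 5 = Strong Interest.

```python
def likert_average(counts):
    """Weighted mean of a 5-point Likert item, where counts maps
    score (1 = No Interest ... 5 = Strong Interest) to respondent count."""
    total = sum(counts.values())
    return sum(score * n for score, n in counts.items()) / total

# Hypothetical responses for one item, e.g. "Access to the Web"
access_to_web = {1: 5, 2: 10, 3: 20, 4: 40, 5: 75}
print(round(likert_average(access_to_web), 2))  # 4.13
```

Sorting the items by this average is what makes the ranking obvious at a glance.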

Particularly surprising to me is the negative sentiment (i.e., no interest or low interest) towards accessing simulations.  I would have expected there to be quite a bit more interest here.  That fact was buried in the other chart.

Note: For those of you who want to see the exact values for particular items, or just compare positive vs. negative sentiment, there is a fully interactive version of this visualization at the end of this blog post.

Yes / No Questions

Here’s another chart that attempts to show responses to the question of which factors cause concern about Mobile Learning.


Why diamonds, and why two sets of them?   Here’s an alternative that I think is easier to understand and prioritize.


There’s some great stuff in the MASIE report, but the published charts obfuscate the data rather than illuminate it.

Here’s the interactive version of the first chart. This would be WAY cooler if we could filter by industry or company size, but that data is not available to me.

Have fun.


 Posted on December 3, 2012 in Mostly Monthly Makeovers, Blog

  7 Responses to “Mostly Monthly Makeover — Masie’s Mobile Pulse Survey”

  1. What if you dropped the “Access to” words from each header, made the chart area wider, and the row height thinner? Maybe something like:
    For me this makes it easy to see that they all have about the same Moderate Interest, and I also feel like it highlights the bottom 3 more.

    • Joe,

      Totally agree on making the questions less wordy. The width of the chart is somewhat constrained by the width of the blog post.

      I’m on the fence about bar width. I’ve seen Stephen Few make bar charts as wide as possible and they look good and read well.


  2. Great work Steve! I really like how you put the average on a circle. This does a good job of emphasizing the value while at the same time allowing comparison to the other circles and it makes the order clear.

  3. Great work, but I’m one who doesn’t like the averages.  As Likert responses are ordinal data, my feeling is that you shouldn’t average them (to do so you need to know that strong interest represents exactly twice the strength of feeling of moderate interest, which you don’t).

    Since I published my original post on the divergent stacked bar chart (what I called the Net Stacked Distribution), we’ve done lots of usability tests on displays that incorporate them and found they’re not universally understood.  Users who are quite numerate really like them, but many users don’t understand them until they’re explained.  As always, you need to know your audience.

    The two that are most easily understood are either simple counts of positive or the net (count of positive – count of negative) – a calculation similar to the Net Promoter Score.  My preference is the net.  
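    The “net” described above is a simple count of positives minus count of negatives. A minimal sketch in Python, using hypothetical response counts (expressing the result as a percentage of all respondents is my assumption):

```python
def net_score(positive, neutral, negative):
    """Net sentiment: positive responses minus negative responses,
    as a percentage of all respondents (similar to the Net Promoter Score)."""
    total = positive + neutral + negative
    return 100 * (positive - negative) / total

# Hypothetical counts for one survey item
print(net_score(positive=90, neutral=30, negative=30))  # 40.0
```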

    On most client projects we’re showing small multiples of questions split by some category (e.g., region) as bullet charts, with the background bar showing the average excluding the other categories.

    • Andrew,

      You make many good points. With respect to showing the average as a superimposed circle, I wonder what Rensis Likert, the developer of the scale (applying 5 for this, 4 for that, etc.) would say. No matter, as the interactive version of the visualization allows you to suppress this display, hide the neutrals, and only show positive vs. negative.

      Although if you’re only showing positive vs. negative, why not just show the percentage of folks who indicated positive? I think that is what you are getting at with the bullet chart with the shaded average. I do like that — and indeed, many of my clients seemed obsessed with just knowing the percentage of folks who selected “the top box” — but many others will want to know the degree of sentiment, hence the divergent stacked bar approach.

      But as you said, it does come down to knowing your audience.
