Why you may be missing important insights if you only look at Percent Top 2 Boxes.

August 23, 2017

Overview

Anyone who has looked to this blog for insights into visualizing survey data knows that my “go to” visualization for Likert scale sentiment data is a divergent stacked bar chart (Figure 1).

Figure 1 — Divergent stacked bar chart for a collection of 5-point Likert scale questions

You might prefer grouping all the positives and negatives together, showing only a three-point scale. Or perhaps you question having the neutrals “straddle the fence,” as it were, with half being positive and half being negative. These are fair points that I’m happy to debate at another time, as right now I want to focus on what happens when we need to compare survey results between two periods.

Showing responses for more than one period

As much as I love the divergent stacked bar chart, it can become a little difficult to parse when you show more than one period for more than one question. Consider the chart below where we compare results for 2017 vs. 2016 (Figure 2).

Figure 2 — Showing responses for two different periods

As comfortable as I am with seeing how sentiment skews positive or negative with a divergent stacked bar chart, I’m at a loss to compare the results across two different years. The only thing that really stands out is that there appears to be a pretty big difference between 2017 and 2016 for “Really important issue 7” at the bottom of the chart.

The allure of Percent Top 2 Boxes

It’s times like these when focusing on the percentage of respondents that selected Strongly agree or Generally agree (Percent Top 2 Boxes) is very tempting.  Consider the connected dot plot in Figure 3.

Figure 3 — Connected dot plot showing difference between Percent Top 2 Boxes in 2017 and 2016.

Hey, that’s clear and easy to read. Indeed, this is one of my recommended approaches for comparing Importance vs. Satisfaction and it works great for comparing results across two time periods.
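For readers who want the mechanics, the measure itself is just a share of responses. Here is a minimal Python sketch of tallying Percent Top 2 Boxes; the responses below are invented for illustration:

```python
# Minimal sketch of computing Percent Top 2 Boxes from raw 5-point
# Likert responses. The response data here is made up for illustration.
from collections import Counter

responses = ["Strongly agree", "Generally agree", "Neutral",
             "Generally disagree", "Strongly agree", "Neutral",
             "Generally agree", "Strongly disagree"]

counts = Counter(responses)
top2 = counts["Strongly agree"] + counts["Generally agree"]
pct_top2 = top2 / len(responses)
print(f"Percent Top 2 Boxes: {pct_top2:.0%}")  # 4 of 8 responses -> 50%
```

Computing this once per question per year gives the two dot positions for each row of the connected dot plot.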

So, we’re all done, right?

Not so fast. While this approach will work in many cases, you should never stop exploring as there may be something important that remains hidden when you only show Percent Top 2 Boxes.

It’s not the economy, it’s the neutrals (stupid)

I was recently working with a client who had surveyed a large group about several contentious topics. The client believed that, much like the population of the United States, the surveyed population had become more polarized over the past year, at least with respect to these survey topics.

In reviewing the results for three questions, if we just focus on the positives (Percentage Top 2 Boxes) things look like they have improved (Figure 4.)

Figure 4 — Connected dot plot showing change in positives (Percentage Top 2 Boxes) between 2017 and 2016.

See? We have more positives (green) now than we did a year ago (gray.)

This may be true, but it only tells part of the story.

Consider the divergent stacked bar chart shown in Figure 5.

Figure 5 — 5-Point divergent stacked bar chart comparing results from 2017 and 2016

Whoa… there’s something very interesting going on here, but it’s very hard to see. Maybe if we combine all the positives and negatives the “ah-ha” will be easier to decipher (Figure 6).

Figure 6 — 3-Point divergent stacked bar chart comparing results from 2017 and 2016. There are big differences between the two time periods, but they are hard to see.

Well, that’s a little better, but the story — and it’s a really big story — is still hidden. Let’s see what happens if we abandon both the connected dot plot and divergent stacked bar chart and instead try a slopegraph (actually, a distributed slopegraph, Figure 7).

Figure 7 — Distributed slopegraph showing change in positives, neutrals, and negatives.

Now we can see it!  Just look at the gray lines showing the dramatic change in neutrals.  My client’s hunch was correct — the population has become much more polarized as the percentage of neutrals has plummeted while the percentage of people expressing both positive and negative sentiment has increased. You cannot see this at all with the connected dot plot and it’s hard to glean from the divergent stacked bar chart.
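To make the pattern concrete, here is a sketch with hypothetical counts for a single question; these are not the client’s actual numbers, just an illustration of how rising positives can coexist with collapsing neutrals:

```python
# Hypothetical response counts (not the client's data) showing how a
# Top 2 Boxes view alone can hide a collapse in neutrals.
def shares(pos, neu, neg):
    """Return the positive, neutral, and negative shares of responses."""
    total = pos + neu + neg
    return pos / total, neu / total, neg / total

pos16, neu16, neg16 = shares(40, 40, 20)   # 2016 counts
pos17, neu17, neg17 = shares(45, 15, 40)   # 2017 counts

print(f"Positives: {pos16:.0%} -> {pos17:.0%}")  # up a little
print(f"Neutrals:  {neu16:.0%} -> {neu17:.0%}")  # plummeting
print(f"Negatives: {neg16:.0%} -> {neg17:.0%}")  # up a lot
```

A chart showing only the positive share reports modest improvement; the slopegraph shows all three lines, which is where the polarization becomes visible.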

There is no one best chart for every situation

I had the good fortune to attend one of Cole Nussbaumer Knaflic’s Storytelling with Data workshops. She uses a wonderful metaphor in describing how much work it can take to present just one really good finding. I paraphrase:

“You have to shuck a lot of oysters to find a single pearl. In your presentations, don’t show all the shells you shucked; just show the pearl.”

For this last example, if I only had 30 seconds of the chief stakeholder’s time I would just show the distributed slopegraph as it is the “pearl.”  It clearly and concisely imparts the biggest finding for the data set: the population has become considerably more polarized for all three issues.

But…

What happens if the chief stakeholder wants to know more? I would be armed with an interactive dashboard to answer questions like these:

“The people that disagree… how many of them strongly disagree?”

“The people that agree… how many of them strongly agree?”

“Are these findings consistent across the entire organization, or only in some areas?”

Conclusion

So, when showing changes in sentiment over time, which chart is best? The connected dot plot? The divergent stacked bar chart? The distributed slopegraph?

To quote my fellow author of the Big Book of Dashboards, Andy Cotgreave, “it depends.”

You should be prepared to apply all three approaches and choose the one that imparts the greatest understanding with the least amount of effort.

Note

I’ve had a number of debates with people about how I prefer to handle neutrals (half on the negative side and half on the positive side). If you find that troubling you can place the neutrals to one side, as shown in Figure 8.

Figure 8 — Neutrals placed to one side providing a common baseline for comparison.
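If you are curious about the arithmetic behind the half-and-half placement, here is a sketch of how the segment offsets can be computed; the counts are made up for illustration:

```python
# Sketch of the offset arithmetic behind a divergent stacked bar with
# neutrals straddling zero (half on the negative side, half on the
# positive side). The counts are invented for illustration.
def divergent_offsets(neg, neu, pos):
    """Return (start, end) x-positions for each segment, with zero at
    the negative/positive boundary and neutrals split across it."""
    left = -(neg + neu / 2)                       # left edge of the whole bar
    return {
        "negative": (left, left + neg),
        "neutral": (left + neg, left + neg + neu),  # centered on zero
        "positive": (neu / 2, neu / 2 + pos),
    }

print(divergent_offsets(neg=20, neu=30, pos=50))
```

Placing the neutrals to one side instead simply means starting the bar at zero and stacking negative, then positive, then neutral (or whatever order you choose), so every bar shares a common baseline.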

August 3, 2017

In my last blog post I pointed out that I wish I had put BANs (big-ass numbers) in the Churn dashboard featured in chapter 24 of the book (see http://www.datarevelations.com/iterate.html.)

I had a similar experience this week when I revisited the Net Promoter Score dashboard from Chapter 17.  I’ve been reading Don Norman’s book The Design of Everyday Things and have been thinking about how to apply many of its principles to dashboard design.

One thing you can do to help users decode your work is to ditch the legend and add a color key to your dashboard title.

Here’s the Net Promoter Score dashboard as we present it in the book.  Notice the color legend towards the bottom right corner.

Figure 1 — Net Promoter Score dashboard from The Big Book of Dashboards.

Why did I place the legend out of the natural “flow” of how people would look at the dashboard? Why not just make the color coding part of the dashboard title, as shown below?

Figure 2 — Making the color legend part of the title.

I’m not losing sleep over this, as this is probably a dashboard that people will be looking at on a regular basis; that is, once they know what “blue” means they won’t need to look at the legend.

But…

Every user will have his / her “first time” with a dashboard, so I recommend that, wherever possible, you make the legend part of the “flow.” For example, instead of the legend being an appendage, off to the side of the dashboard…

Figure 3 — Color legend as an appendage.

Consider making the color legend part of the title, as shown here.

Figure 4 — Color coding integrated into the title.

 


The Importance of feedback, iteration, and constant improvement in data visualization (and finding people that will tell you when you are full of crap.)

July 5, 2017

Overview

People ask me how three opinionated people can write a book like The Big Book of Dashboards together.  Didn’t we disagree on things?  How were we able to work out our differences?

I can’t speak for Jeff Shaffer and Andy Cotgreave, but I’m very glad I found two fellow authors that would challenge every assertion I had, as it made for a much better book.

And why did it work?

It worked because we had one overarching goal in common.

Clarity.

When people ask me about the process I think of a band breaking up because of “artistic differences.” That didn’t happen with the three of us because we weren’t trying to create art.  For certain, we wanted dashboards that were beautiful, but more than anything else we wanted dashboards that allow the largest number of people to get the greatest degree of understanding with a minimum amount of effort.

Let me take you through a case study on how the Churn dashboard came to fruition and how following the approach we used can help you make better dashboards.

Background

I had just finished presenting the third day of three days’ worth of intensive Tableau training when an attendee showed me a data set like the one below.

Figure 1 — Subscribers gained and lost over time within different divisions

I asked the attendee what she thought she needed to be able to show and she said it was important to know when and where things were really good (i.e., many more people signing up than cancelling) and where and when things were really bad (i.e., more people cancelling than signing up).

She also stressed that management would insist on seeing the actual numbers and not just charts.

This is not a horse, it’s a dashboard

Here’s a famous quote attributed to car designer Alec Issigonis:

“A camel is a horse designed by a committee.”

The main idea is that you will run into problems if you attempt to incorporate many people’s opinions into a single project.

This was not the case with the Churn dashboard as we received more input from more people over a longer period than any other dashboard in the book — and it resulted in a much better product than if I had just gone at it alone.

Let’s look at the evolution of the dashboard.

Churn, take one

Here’s an image of one of my first attempts to show what was happening for Division A.

Figure 2 — Early attempt at showing churn.

Starting with the left side of the top chart, we see a starting point for the month (0 for January, 30 for February, 20 for March, etc.), the number of people subscribing (the gray bars going up), and the number of people cancelling (the red bars going down).  It’s easy to see that I had more people subscribing than cancelling in January, and more people cancelling than subscribing in February.

The second chart shows the running sum over time.
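The running sum is simple arithmetic on the monthly gains and losses. Here is a quick Python sketch; the individual gains and losses are invented, but chosen so the monthly starting points match the ones described above (0, 30, 20):

```python
# Sketch of the running-total arithmetic behind the second chart in
# Figure 2. Gains/losses are invented but consistent with the monthly
# starting points described in the text (0 for Jan, 30 for Feb, 20 for Mar).
gains = [40, 10, 25]    # Jan, Feb, Mar subscriptions
losses = [10, 20, 15]   # Jan, Feb, Mar cancellations

running = []
total = 0
for g, l in zip(gains, losses):
    total += g - l      # net change for the month
    running.append(total)

print(running)  # end-of-month totals: [30, 20, 30]
```

Each month’s starting point is simply the previous month’s running total, which is what the waterfall-style bars in the top chart encode.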

Churn, takes two through fifty

Here’s a collage of some additional endeavors, many of which I didn’t even bother to share with others.

Figure 3 — A collage of multiple attempts to show churn.

Most of my attempts were fashioned around some type of Gantt / waterfall chart, but one chart that showed promise for a small group of reviewers was a juxtaposed area chart, dubbed the “mountain” chart by one client who was kind enough to give me feedback.

Figure 4 — The “mountain” chart.  Beware of dual axes charts.

While some people “got” this, most had a problem with the negative numbers (the cancellations depicted as red mountains) being displayed as a positive.  The idea was to allow people to see in which months the negatives exceeded the positives, and you can in fact see this easily (February, May, and especially July).  But most people were simply confused, even after receiving an explanation of how the chart worked.

In addition, superimposing a second chart (in this case the running total line) almost always invites confusion as people must figure out how the axes work (e.g., “do the numbers on the left axis apply to the area chart or to the line?”)

Getting closer, but Andy doesn’t buy it (and I’m glad he didn’t)

I thought I had a winner in the chart shown below.

Figure 5 — Overly complicated waterfall chart.

I showed this to Andy and he just didn’t buy it.  It was then that I realized that I had lost my “fresh eyes” and what was clear to me was not clear to somebody else, even somebody as adept at deciphering charts as Andy. Andy explained that he was having trouble with the spacing between charts and the running totals. It was just too hard for him to parse.

I took the feedback to heart and realized that the biggest problem was that there should be no spacing between the gray bar and the red bar for a particular month, but to get that clustering and spacing I would need to work around Tableau’s tendency not to cluster bar charts.

Fortunately, Jonathan Drummey had written a blog post on how to cajole Tableau into clustering the bars within each month together and I was able to fashion this view, which made it into the final dashboard.

Figure 6 — Gains, losses, and running total, all in one reasonably easy-to-understand chart.

Note: I don’t expect people unfamiliar with this chart type to be able to read it without some instruction. As with a bullet chart and other chart types that people may never have seen before, when you publish a novel chart you will have to either sit down with the audience member or prepare a short video explaining how the pieces fit together.

Showing the details, but Jeff doesn’t buy it (and I’m glad he didn’t)

You may recall that one of the requirements was that people using the dashboard would need to see the numbers driving the chart. I suggested adding the text table shown below.

Figure 7 — Details, details, details

When I showed this to Jeff there was a long pause, and then I recall him saying something along the lines that he didn’t think this added much to the analysis.  By this time I had worked with Jeff for well over a year and I knew that “I don’t think this adds much” was Jeff’s way of politely telling me that he hated that component of the dashboard.

I started to argue with him that there was a stated demand by the audience to show the actual numbers driving the charts when I realized that Jeff was in fact correct — just showing the numbers didn’t add much and there was a better way to meet the requirement and provide additional insight.

Use a highlight table.

Figure 8 — D’oh! How did I miss this?  I’m usually the one yelling at people for just having a text table when they can instead have a value-added text table. Just look at July in the East!

I wish I had put in BANs!

I got a great deal from reviewing the dashboards other people submitted to the book and now wish I could go back in time and borrow some techniques from those dashboards and apply them to my own. Indeed, there isn’t one dashboard that I built for the book that I wouldn’t like to modify and that is certainly the case with the Churn dashboard.  Here’s the version that is in the book.

Figure 9 — Churn dashboard, as shown in The Big Book of Dashboards

Here’s the dashboard I would submit now.

Figure 10 — Churn dashboard with BANs (Big-Ass Numbers)

See the difference? There are BANs (Big-Ass Numbers) along the top and, as I’ve written previously, these elements can do a lot to help people understand key components of a dashboard: they can be conversation starters (and finishers), provide context to adjacent charts, and serve as a universal color legend.

Conclusion and Resources

If I could only make one recommendation on how to make better dashboards it would be to find people who will give you good, constructive feedback on whether what you’ve built is as clear as you think it is. Gird yourself for a lot of revisions and be prepared to add refinements, but it will be more than worth it.

Want to know more about the Churn dashboard? That chapter from The Big Book of Dashboards is available online at http://bit.ly/dashboardsbook

Do you work with Tableau and want to download the Churn packaged workbook?  You can download it from http://bigbookofdashboards.com/dashboards.html.

Want to purchase The Big Book of Dashboards? You can get it here.

Postscript: I asked Jeff and Andy to review this post before it went live.  Jeff had some ideas on how I might modify the BANs to make them clearer.  It never ends.


May 10, 2017

Overview

Most organizations want to wildly exceed customer expectations for all facets of all their products and services, but if your organization is like most, you’re not going to be able to do this. Therefore, how should you allocate money and resources?

First, make sure you are not putting time and attention into things that aren’t important to your customers and make sure you satisfy customers with the things that are important.

One way to do this is to create a survey that contains two parallel sets of questions asking customers to indicate the importance of certain features / services and how satisfied they are with those features / services.  A snippet of what this might look like to a survey taker is shown in Figure 1.

Figure 1 — How the importance vs. satisfaction questions might appear to the person taking the survey.

How to Visualize the Results

I’ve come up with a half dozen ways to show the results and will share three approaches in this blog post.  All three approaches use the concept of “Top 2 Boxes” where we compare the percentage of people who indicated Important or Very Important (the top two possible choices out of five for importance) and Satisfied or Very Satisfied (again, the top two choices for Satisfaction).

Bar-In-Bar Chart

Figure 2 shows a bar-in-bar chart, sorted by the items that are most important.

Figure 2 — Bar-in-bar chart

This works fine, as would having a bar and a vertical reference line.

It’s easy to see that we are disappointing our customers in everything except the least important category and that the gap between importance and satisfaction is particularly pronounced in Ability to Customize UI (we’re not doing so well in Response Time, 24-7 Support, and Ease of Use, either.)

Scatterplot with 45-degree line

Figure 3 shows a scatterplot that compares the percent top 2 boxes for Importance plotted against the percent top 2 boxes for Satisfaction where each mark is a different attribute in our study.

Figure 3 — Scatterplot with 45-degree reference line

The goal is to be as close to the 45-degree line as possible in that you want to match satisfaction with importance. That is, you don’t want to underserve customers (have marks below the line) but you probably don’t want to overserve, either, as marks above the line suggest you may be putting too many resources into things that are not that important to your customers.
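The 45-degree-line reading reduces to a simple comparison per attribute. Here is a sketch; the attribute names and percentages below are invented for illustration:

```python
# Sketch of the 45-degree-line logic: a mark below the line
# (satisfaction < importance) is an underserved attribute. The
# names and percentages here are invented for illustration.
attributes = {
    "Response Time": (0.85, 0.55),            # (% top 2 Importance, % top 2 Satisfaction)
    "Ability to Customize UI": (0.80, 0.40),
    "Online Tutorials": (0.35, 0.60),
}

underserved = [name for name, (imp, sat) in attributes.items() if sat < imp]
for name, (imp, sat) in attributes.items():
    print(f"{name}: gap {sat - imp:+.0%}")
print("Underserved:", underserved)
```

Marks with a negative gap sit below the line (underserved); marks with a positive gap sit above it (possibly overserved).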

As with the previous example it’s easy to see the one place where we are exceeding expectations and the three places where we’re quite a bit behind.

Dot Plot with Line

Of the half dozen or so approaches the one I like most is the connected dot plot, shown in Figure 4.

Figure 4 — Connected dot plot. This is the viz I like the most.

(I placed “I like most” in italics because all the visualizations I’ve shown “work” and one of them might resonate more with your audience than this one.  Just because I like it doesn’t mean it will be the best for your organization so get feedback before deploying.)

In the connected dot plot the dots show the top 2 boxes for importance compared to the top 2 boxes for satisfaction.  The line between them underscores the gap.

I like this viz because it is sortable and easy to see where the gaps are most pronounced.

But what about a Divergent Stacked Bar Chart?

Yes, this is my “go to” viz for Likert-scale things and I do in fact incorporate such a view in the drill-down dashboard found at the end of this blog post. I did in fact experiment with the view but found that while it worked for comparing one feature at a time it was difficult to understand when comparing all 10 features (See Figure 5.)

Figure 5 — Divergent stacked bar overload (too much of a good thing).

How to Build This — Make Sure the Data is Set Up Correctly

As with everything survey related, it’s critical that the data be set up properly. In this case for each Question ID we have something that maps that ID to a human readable question / feature and groups related questions together, as shown in Figure 6.

Figure 6 — Mapping the question IDs to human readable form and grouping related questions

Having the data set up “just so” allows us to quickly build a useful, albeit hard to parse, comparison of Importance vs. Satisfaction, as shown in Figure 7.

Figure 7 — Quick and dirty comparison of importance vs. satisfaction.

Here we are just showing the questions that pertain to Importance and Satisfaction (1). Note that the measure [Percentage Top 2 Boxes] on Columns (2) is defined as follows.

Figure 8 — Calculated field for determining the percentage of people that selected the top 2 boxes.

Why >=3?  It turns out that the Likert scale for this data went from 0 to 4, so here we just want to add up everyone who selected a 3 or a 4.

Not Quite Ready to Rock and Roll

This calculated field will work for many of the visualizations we might want to create, but it won’t work for the scatterplot and it will give us some headaches when we attempt to add some discrete measures to the header that surrounds our chart (the % Diff text that appears to the left of the dot plot in Figure 4.) So, instead of having a single calculation I created two separate calculations to compute % top 2 boxes Importance and % top 2 boxes Satisfaction. The calculation for Importance is shown in Figure 9.

Figure 9 — Calculated field for determining the percentage of folks that selected the top two boxes for Importance.

Notice that we have all the rows associated with both the Importance questions and Satisfaction “in play”, as it were, but we’re only tabulating results for the Importance questions so we’re dividing by half of the total number of records.

We’ll need to create a similar calculated field for the Satisfaction questions.
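The logic of that pair of calculated fields can be sketched in Python. This is an analogue of the Tableau calculation, not Tableau itself, and the rows below (on the post’s 0-to-4 scale) are invented:

```python
# Sketch of the "divide by half the records" logic: rows for both the
# Importance and Satisfaction questions are in the partition, but each
# respondent answers both, so the Importance denominator is half the
# row count. Rows are invented, on a 0-4 Likert scale (>= 3 is top 2).
rows = [
    ("Importance", 4), ("Importance", 3), ("Importance", 1), ("Importance", 4),
    ("Satisfaction", 2), ("Satisfaction", 3), ("Satisfaction", 0), ("Satisfaction", 1),
]

top2_importance = sum(1 for qtype, value in rows if qtype == "Importance" and value >= 3)
pct_importance = top2_importance / (len(rows) / 2)
print(f"% Top 2 Boxes, Importance: {pct_importance:.0%}")  # 3 of 4 -> 75%
```

The Satisfaction version is identical except that it counts the Satisfaction rows; keeping the two as separate measures is what lets them be plotted against each other in the scatterplot and placed side by side in the dot plot.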

Ready to Rock and Roll

Understanding the Dot Plot

Figure 10 shows what drives the Dot Plot (we’ll add the connecting line in a moment.)

Figure 10 — Dissecting the Dot Plot.

Here we see that we have a Shape chart (1) that will display two different Measure Values (2) and that Measure Names (3) is controlling Shape and Color.

Creating the Connecting Line Chart

Figure 11 shows how the Line chart that connects the shapes is built.

Figure 11 — Dissecting the Line chart.

Notice that Measure Values is on Rows a second time (1), but in the second instance the mark type is Line (2) and the end points are connected using Measure Names on the Path (3).  Also notice that there is no longer anything controlling the Color, as we want a line that is only one color.

Combining the Two Charts

The only thing we need to do now is combine the two charts into one by making a dual axis chart, synchronizing the secondary axis, and hiding the secondary header (Figure 12.)

Figure 12 — The completed connected dot plot.

What to Look for in the Dashboard

Any chart that answers a question usually fosters more questions. Consider the really big gap in Ability to Customize UI. Did all respondents indicate this, or only some?

And if one group was considerably more pronounced than others, what were the actual responses across the board (vs. just looking at the percent top 2 boxes)?

Figure 13 — Getting the details on how one group responded

The dashboard embedded below shows how you can answer these questions.

Got another approach that you think works better?  Let me know.


April 25, 2017

Overview

I became a big fan of adding a marginal histogram to scatterplots when I first saw them applied in Tableau visualizations from Shine Pulikathara and Ben Jones.

For those not familiar with how these work, consider the scatterplot shown in Figure 1 that shows the relationship between salary and age.

Figure 1 — Comparing Age and Salary on a scatterplot

Some interesting things here; for example, we can see that salaries appear to be highest between ages 50 and 55 and lowest among the youngest and oldest workers.

But look what happens when we add marginal histograms to the x and y axes (Figure 2.)

Figure 2 — Scatterplot with marginal histogram

Whoa! The two bar charts to the right and below the main chart add a lot of insight into the data.  We don’t just see the correlations, but now we can also see age demographics and salary distribution in the organization.
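The histograms themselves are just binned counts of the same marks that appear in the scatterplot. A small sketch, with ages invented for illustration:

```python
# Sketch of the binning behind a marginal histogram: the scatterplot's
# x values (here, ages) get bucketed into the bars drawn below the
# chart. The ages are invented for illustration.
from collections import Counter

ages = [23, 27, 34, 38, 41, 44, 52, 53, 54, 61]
bin_width = 10
bins = Counter((age // bin_width) * bin_width for age in ages)

for start in sorted(bins):
    print(f"{start}-{start + bin_width - 1}: {'#' * bins[start]}")
```

The salary histogram on the right is the same idea applied to the y values; neither requires any data beyond what the scatterplot already uses.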

Marginal Histograms and Jitterplots

The marginal histogram works with other visualizations as well. Consider the dot plot with jitter (jitterplot) example from Lean management tool innovator LeanKit in Figure 3.

Figure 3 — Individual and aggregate views of important data from LeanKit

The combination of the individual data points (the jittered dots that represent Kanban cards) and the aggregated data (stacked bar charts) tells a more complete story than having only the aggregation or only the individual dots.

Marginal Histograms and Highlight Tables

Readers of this blog know I like highlight tables and often use them as a “visualization gateway drug” to move people from cross tabs to more insightful ways of looking at their data.

But as great as they are, they do not lend themselves to accurate comparisons of the data. Consider Figure 4 where we see the percentage of sales broken down by region.

Figure 4 — Sorted highlight table showing percentage of sales by sub-category and region

Yes, I can see that Phones in the East is a lot darker than Copiers in the West, but without the numbers there’s no way to do an exact comparison, as I don’t know of anyone who can look at just the color coding and exclaim “ah, that cell is twice as blue as that other cell.”

But look what happens when we add the marginal histogram to the visualization, as shown in Figure 5.

Figure 5 — Sorted highlight table with marginal histograms. Here we see percentage of sales.

So much added insight, and so little added screen real estate!

I’ll confess that the histograms don’t work quite as well if you have negative values. Here’s what it looks like if we look at percentage of profit broken down by sub-category and region.

Figure 6 — Sorted highlight table with marginal histograms. Here we see percentage of profit.

Because we have bars pointing in different directions for the histogram on the right the look isn’t quite as clean, but it certainly works.

See for Yourself

I’ve included an embedded dashboard below where you can experiment with different metrics and different sorting choices. Feel free to download and “look under the hood.”

Note that making this type of dashboard is not very difficult; the only tricky part is getting the three elements to align properly. Ben Jones gets into those particulars in his blog post.

 


More thoughts on the Marimekko chart and in particular how to build one in Tableau.

April 4, 2017

Overview

Given my reluctance to embrace odd chart types and my conviction that I would find something better I was surprised to find myself last month writing about — and endorsing — the Marimekko chart.

If I was surprised then I’m absolutely gobsmacked to be writing about it again.

What precipitated all this was another very good example of the chart in the wild. After admiring it I couldn’t help but “look under the hood” (hey, we are talking about Tableau Public and people sharing this stuff freely) and I thought that the dashboard designer was working harder than he needed to in building the visualization.

So, if people are going to use these things I thought I would share an alternative, and I think easier, technique for building them.

The Great Example from Neil Richards

Here’s the terrific Makeover Monday dashboard from Neil Richards where we see the likelihood of certain jobs being replaced by automation.

Figure 1 — Neil Richards’s Makeover Monday dashboard showing the likelihood of certain jobs being replaced by automation.

Neil does a great job highlighting some of the more interesting findings, but if you want to know more than what Neil highlights you’ll need to explore the dashboard on your own.

Notice that in both this case and in Emma Whyte’s we are dealing with only two data segments; e.g., male vs. female and at-risk vs. not at-risk jobs. Having only two colors is one of the main reasons why the chart works well.

Okay! Uncle! I agree that under the right conditions this is a useful chart and I can see why you may want to make one.

But is there an easier way to make one?

An Easier Way to Create a Marimekko Chart in Tableau

It turns out the same technique Joe Mako showed me six years ago for building a divergent stacked bar chart works great for fashioning a Marimekko.  Let’s see how to do this using Superstore data with fields similar to what was available in both Emma’s and Neil’s dashboards.

Let’s say I want to compare the magnitude of sales with the profitability of items by region.  Figure 2 shows the overall magnitude of sales but makes comparing profitability difficult.

Figure 2 — Overall sales is easy to see but comparing profitability across regions is difficult.

Here’s another attempt using a 100% stacked bar chart.

Figure 3 — Showing profitability with a 100% stacked bar chart.

Yes, this does a much better job of letting us compare the profitability of each region, but there’s no way to easily glean that sales in the West are almost double sales in the South (which is easy to do in Figure 2).

So, how can we make the regions that have large sales wide and the regions that have small sales narrow?

Understanding the Fields

Before going much further let’s make sure we understand the following three fields:

  • Percentage Profitable Sales
  • Percentage Unprofitable Sales
  • Sales Percentage of

[Percentage Profitable Sales]

This is defined as

SUM(IF [Profit]>=0 THEN [Sales] END)/SUM([Sales])

… and translates as “if an item within the partition is profitable, add up its sales, then divide by the total sales within the partition.”

This is the field that gives us the 90%, 77%, 76%, and 72% results shown in Figure 3.
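In Python terms, the calculation does something like the sketch below. The line items are invented for illustration, not the actual Superstore rows:

```python
# Hedged sketch of what Tableau computes per partition for
# [Percentage Profitable Sales]. The rows below are made up.
items = [
    {"region": "West", "sales": 900.0, "profit": 50.0},
    {"region": "West", "sales": 100.0, "profit": -10.0},
    {"region": "South", "sales": 400.0, "profit": -5.0},
    {"region": "South", "sales": 600.0, "profit": 20.0},
]

def pct_profitable_sales(rows):
    # SUM(IF [Profit] >= 0 THEN [Sales] END) / SUM([Sales]) for one partition
    profitable = sum(r["sales"] for r in rows if r["profit"] >= 0)
    return profitable / sum(r["sales"] for r in rows)

west = [r for r in items if r["region"] == "West"]
print(pct_profitable_sales(west))  # 0.9
```

Each region acts as its own partition, just as Region partitions the table calculation in the view.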

[Percentage Unprofitable Sales]

This is defined as

1 - [Percentage Profitable Sales]

… and gives us the 10%, 23%, 24%, and 28% shown in Figure 3.

[Sales Percentage of]

This is defined as

SUM([Sales]) /TOTAL(SUM([Sales]))

… and we will use it to compute the percentage of sales across the four regions (i.e., show me the sales for one region divided by the sales for all the regions). Here’s how we might use it in a visualization.

Figure 4 — Using the calculation to figure out how wide each region should be.

So, in Figure 4 we can see that the West segment is a lot thicker than the South segment.
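If you want to sanity-check the shares outside Tableau, here is a small Python sketch. The regional totals are approximate Superstore values assumed for illustration (your copy of the data may differ slightly), but West’s share works out to the 31.6% shown in the figure:

```python
# [Sales Percentage of] = SUM([Sales]) / TOTAL(SUM([Sales])), region by region.
# Totals below are approximate Superstore values, assumed for illustration.
region_sales = {"West": 725458, "East": 678781, "Central": 501240, "South": 391722}
grand_total = sum(region_sales.values())
share = {region: sales / grand_total for region, sales in region_sales.items()}
print({r: round(s, 3) for r, s in share.items()})
# {'West': 0.316, 'East': 0.295, 'Central': 0.218, 'South': 0.171}
```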

How can we apply this additional depth to what we had in Figure 3?

Make it Easy to See if the Math is Correct

At this point it will be helpful to see the interplay of the various measures and dimensions using a cross tab like the one shown in Figure 5.

Figure 5 — Cross tab showing the relationship among the different measures and dimensions.

The first four columns are easy to interpret:

“I see that sales in the West is $725,458 of which 10% is unprofitable and 90% is profitable.  That $725,458 represents 31.6% of the total sales.”

But how is the field called [Start at] defined and how are we going to use it?

Understanding [Start at]

[Start at] is defined as

PREVIOUS_VALUE(0)+ZN(LOOKUP([Sales Percentage of],-1))

This is the calculation that figures out where the mark should start while [Sales Percentage of] will later determine how thick the mark should be.  Let’s see how this all works together.

Figure 6 — How [Start at] and [Sales Percentage of] will work together.  Note that “Compute Using” for the two table calculations is set to [Region].

For the West region we want to start at 0% and have a bar that is 31.6% wide. The function

PREVIOUS_VALUE(0)

tells Tableau to use whatever the value of [Start at] is for the row above, and if there is no row above, to make the value 0 (see Item 1 in Figure 6, above).

Add to this the value for [Sales Percentage of] in the previous row (Item 2, which is also not present for the first row) and you get 0 + 0 (Item 3).

For the East region we want to start wherever West left off (Item 3 plus Item 4, which gives us Item 5) and make the mark 29.5% wide (Item 6).

For the Central region we want to start wherever the previous region left off (Item 5 plus Item 6, which gives us Item 7) and make the mark 21.8% wide (Item 8).
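Before wiring this up in Tableau, it may help to see the running-offset idea in plain code. Here is a Python sketch using the region shares from the figures above (the South share is simply the remainder):

```python
# Sketch of PREVIOUS_VALUE(0) + ZN(LOOKUP([Sales Percentage of], -1)):
# a running offset where each bar starts where the previous bar ended.
shares = [0.316, 0.295, 0.218, 0.171]  # West, East, Central, South

def start_at(shares):
    starts, running = [], 0.0
    for share in shares:
        starts.append(running)  # this bar starts where the last one ended
        running += share        # ...and pushes the next bar over by its width
    return starts

print([round(s, 3) for s in start_at(shares)])  # [0.0, 0.316, 0.611, 0.829]
```

Those are exactly the 0, 31.6%, 61.1%, and 82.9% starting points the cross tab walks through.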

Let’s see how this all fits together into the Marimekko visualization in Figure 7.

Figure 7 — Using [Start at] and [Sales Percentage of] to make the Marimekko work.

There are three things to keep in mind.

  1. [Start at] is on columns and determines the starting point (how far to the right) for each of the regions.
  2. [Sales Percentage of] is on Size and determines how thick the bars should be.
  3. Size is set to Fixed width, left aligned, where Fixed means the measure on the Size shelf determines the thickness.

Figure 8 — Size must be fixed and left-aligned.

Some Interesting Findings

I built a parameter-driven version of the Marimekko (embedded at the end of this blog post) that allows the viewer to select different dimensions and different ways to sort. Here’s what happens when we look at Sub-Category sorted by Profitability.

Figure 9 — Profitability by Sub-Category.

Okay, not a big surprise here given how many visualizations we’ve all seen showing that Tables are problematic.

That said, I was in for a surprise when I broke this down by state and sorted by the magnitude of sales, as shown below.

Figure 10 — Profitability by state, sorted by Sales.

Wow, after 11 years of living with this data set I never realized that 60% of the items sold in Texas were unprofitable.  Who knew?

To be honest I’m not convinced we need a Marimekko to see this clearly.  A simple sorted bar chart will do the trick, as shown in Figure 11.

Figure 11 — Sorted bar chart.

Indeed, I think this very simple view is better than the Marimekko in many respects.

I guess it depends what you’re trying to get across.

See for Yourself

I’ve included an embedded workbook that has the Superstore example as well as versions of the visualizations Emma Whyte and Neil Richards built, but using this alternative technique.

I encourage you to think long and hard before deploying a Marimekko.  But if you do decide to build one I hope the techniques I explored here will prove useful.

 

Mar 20, 2017
 

Or

How I stopped worrying and learned to appreciate the Marimekko

March 19, 2017

Overview

Readers of my blog know that I suffer from what Maarten Lambrechts calls xenographphobia, the fear of unusual graphics.  I’ll encounter a chart type that I’ve not seen before, purse my lips, and think (smugly) that there is undoubtedly a better way to show the data than in this novel and, to me, unusual chart.

That was certainly my reaction to “Marimekko Mania” when Tableau 10.0 was first released. I didn’t see a solid use case for this chart. There were some wonderful blog posts from Jonathan Drummey and Bridget Cogley on the subject, but I just wasn’t buying the need for the chart type.

Note: It turns out that for many situations you can make a perfectly fine Marimekko just using table calculations. I’ll weigh in on this later.

Enter Emma Whyte and Workout Wednesday

My “I’ll never need to use that” arrogance was disrupted a few weeks ago when I read this blog post from Emma Whyte.  The backstory is that Emma reviewed a Junk Charts makeover of a Wall Street Journal graphic, really liked the makeover, and decided to recreate it in Tableau.

Here’s the Wall Street Journal graphic.

Figure 1 — Source of inspiration for Junk Charts and Emma Whyte. From a 2016 survey by LeanIn.org and McKinsey & Co.

There are two important things the data is trying to tell us:

  1. The percentage of women decreases, a lot, the higher up you go in the corporate hierarchy; and,
  2. There are far more entry-level positions than there are managers than there are VPs, etc.

The chart does a good job on the first point but only uses text to convey the second point.

Contrast this with Emma Whyte’s visualization:

Figure 2 — Emma Whyte’s makeover.

Whoa.

I immediately “grokked” this.  There are way more men than women among VPs, Senior VPs, and in the C-Suite, but look how much narrower those bars are!  True, I cannot easily compare how much wider the Entry Level column is than the VP column, but is that really important?

Is the Marimekko in fact the “right” way to show this?

Being a little bit stubborn I was not ready to declare a Marimekko victory so I decided to see if I could build something that worked as well, if not better, using more common chart types.

Anything You Can Do, I Can Do…

I won’t go through all ten iterations I came up with but I will show some of my attempts to convey the data accurately and with the visceral wallop I get from Emma’s makeover.

100% Stacked Bar with Marginal Histogram

Putting a histogram in the margin has become a “go to” technique when I’m dealing with highlight tables and scatterplots so I thought that might work in this situation. Here’s a 100% stacked bar chart combined with a histogram.

Figure 3 — 100% stacked bar with marginal histogram.

I was so convinced this would just smoke the Marimekko. I mean just look how easy it is to make accurate comparisons!

That may be true, but I think the Marimekko in question does a better job.

Connected Dot Plot

Here’s another attempt using a connected dot plot.

Figure 4 — Connected dot plot where the size of the circles reflects the percentage of the workforce.

Here the lines separating the circles show the gender gap and the size of the circles reflects the percentage of the workforce.

OK, I think the gap is well represented, but the spacing between job levels is a fixed width.  In my pursuit of accuracy I needed to find a way to spread the circles based on percentage of the workforce.

Diverging Lines with Bands

Figure 5 shows two diverging lines with circles and bands that are proportionate to the percentage of the workforce (Entry level is 52 units wide, Manager is 28 units wide, and so on).

Figure 5 — Diverging lines with dots and correctly-sized circles and bands

But why are the lines sloping?  Shouldn’t the lines be flat for each job level?

Flat Lines

Here’s a similar approach but where the lines stay flat for each job level.

Figure 6 — Flat lines and accurate circles and bands.

More Approaches and the Graphic from the Actual Report

All told I made ten attempts.  The calculation I came up with for Figure 5 also made it possible to create a Marimekko using just a simple table calculation.

Note: I asked Jonathan Drummey to have a look at the Marimekko-with-table-calc approach and he points out that in both my example and Emma Whyte’s example the data isn’t “dense” so you can break the visualization simply by right-clicking a mark and selecting Exclude. That said, the technique is fine for static images and dashboards where you disable the Exclude functionality.

I also reviewed the full Women in the Workplace report and saw they used an interesting pipeline chart to relate the data.

Figure 7 — “Pipeline” chart from Women in Workplace report (LeanIn.Org and McKinsey & Co.)

I applaud the creativity but have a lot of problems with the inaccurate proportions. Notice that this chart also has a sloping line suggesting a continuous decrease as you go from one level to another.

And The Winner is…

For me, Emma Whyte’s Marimekko does the best job of showing the data in a compelling and accurate format and I thank Emma for presenting such a worthwhile example.

Will I use this chart type in my practice?

It depends.

If the situation calls for it, I would try it along with other approaches and see what works best for the intended audience.

Here’s a link to the Tableau workbook that contains a copy of Emma Whyte’s original approach and many of my attempts to improve upon it. If you come up with an alternative approach that you think works well, please let me know.

Postscript

Big Book of Dashboards co-author Jeff Shaffer encouraged me to make some more attempts. Here’s a work in progress using jittering.

Jitter with bands

I think this looks promising.

Mar 8, 2017
 

And… what did they choose?

March 8, 2017

Overview

I’ve discussed how to visualize check-all-that-apply questions in Tableau. Assuming your survey is coded as Yes = 1 and No = 0, you can fashion a sorted bar chart like the one shown in Figure 1 using the following calculation.

SUM([Value]) / SUM([Number of Records])

The field [Value] would be 0 or 1 for each respondent that answered the question.
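As a quick illustration, here is a Python sketch of the same arithmetic using hypothetical 0/1 responses (the item names and values are made up):

```python
# Percent selecting each item: SUM([Value]) / SUM([Number of Records]),
# sketched with hypothetical (item, value) response records.
from collections import defaultdict

responses = [
    ("Weight", 1), ("Weight", 1), ("Weight", 0),
    ("Metabolism", 1), ("Metabolism", 0), ("Metabolism", 0),
]

totals = defaultdict(lambda: [0, 0])  # item -> [sum of values, record count]
for item, value in responses:
    totals[item][0] += value
    totals[item][1] += 1

pct = {item: s / n for item, (s, n) in totals.items()}
print({item: round(p, 2) for item, p in pct.items()})
# {'Weight': 0.67, 'Metabolism': 0.33}
```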

Figure 1 — Visualizing a check-all-that-apply question

I’ve also discussed how we can break this down by various demographics (Gender, Location, Generation, etc.)

What I’ve not blogged about (until now) is how to answer the following questions:

  • How many people selected one item?
  • Two items?
  • Five items?
  • Of the people that only selected one item, what did they select?
  • Of the people that selected four items, what did they select?

Prior to the advent of LoD calculations this was doable, but a pretty big pain in the ass.

Fortunately, using examples that are “out in the wild” we can cobble together a compelling way to show the answers to these questions.

 

Visualizing How Many People Selected 1, 2, 3, N Items?

One of the best blog posts on Level-of-Detail expressions is Bethany Lyons’ Top 15 LoD Expressions.

It turns out the very first example discusses how to figure out how many customers placed one order, how many placed two orders, etc.  This will give us exactly what we need to figure out how many people selected 1, 2, 3, N items in a check-all-that-apply question.

Here’s the calculation that will do the job.

Figure 2 — The LoD calculation we’ll need.

This translates as “for the questions you are focusing on (and you better have your context filters happening so you are only looking at just the check-all-that-apply stuff), for each Resp ID, add up the values for all the questions people answered.”

Remember, the responses are 0s and 1s, so if somebody selected six things the SUM([Value]) would equal 6.

So, how do we use this?

The beautiful thing about using FIXED as our LoD keyword is that it allows us to turn the results into a dimension.  This means we can put [How Many Selected] on Columns and CNTD([Resp ID]) on Rows and get a really useful histogram like the one shown in Figure 3.

Figure 3 — Histogram showing number of respondents that selected 0 items, 1 item, 2 items, etc.

Notice the filter settings indicating that we only want responses to the check-all-that-apply questions. Further note that this filter has been added to the context which means we want Tableau to filter the results before computing the FIXED LoD calculation.
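Outside of Tableau, the FIXED-then-histogram logic looks like the Python sketch below. The respondent data is hypothetical, and the list stands in for the already-context-filtered check-all-that-apply records:

```python
# Sketch of {FIXED [Resp ID] : SUM([Value])} followed by CNTD(Resp ID):
# sum each respondent's 0/1 values, then count respondents per total.
from collections import Counter, defaultdict

rows = [  # (Resp ID, Value), check-all-that-apply questions only
    (1, 1), (1, 1), (1, 0),
    (2, 0), (2, 0), (2, 0),
    (3, 1), (3, 0), (3, 0),
]

how_many_selected = defaultdict(int)
for resp_id, value in rows:
    how_many_selected[resp_id] += value   # items checked per respondent

histogram = Counter(how_many_selected.values())  # respondents per count
print(sorted(histogram.items()))  # [(0, 1), (1, 1), (2, 1)]
```

Filtering before the per-respondent sum mirrors adding the question filter to context so it runs before the FIXED calculation.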

 

So, what did these People Select?

Okay, now we know how many people selected one item, two items, etc.

Just what did they select?

Because we set [How Many Selected] using the FIXED keyword we can use it like any other dimension.  That is, it will behave just like [Gender], [Location], and so on.

Borrowing from an existing technique (the visual ranking by category that I cited earlier) we can fashion a very useful dashboard that allows us to see some interesting nuances in the data.  For example, while Metabolism is ranked second overall with 70% of people selecting it, it ranked seventh among those that only selected one item (with only 4%), while 84% of people that selected four items selected it (Figure 4.)

Figure 4 — Metabolism is ranked second overall with 70%, but only 4% of folks that chose one item selected it.

Similarly, check out the breakdown for Blood Pressure which is ranked third with 60% overall but is ranked first among folks that only measure one thing (Figure 5.)

Figure 5 — Blood Pressure is ranked third overall with 60%, but was ranked first among those that only selected one item.

 

Other Useful Features of the Dashboard

The Marginal Histogram

The marginal histogram along the bottom of the chart shows you the breakdown of responses.

Figure 6 — Marginal histogram shows distribution of responses

Tool Tips Help Interpret the Findings

The ordinal numbers can be confusing as sometimes the number 2 means the number of items selected and other times it is the ranking.  Hovering over a bar explains how to interpret the results.

Figure 7 — Tool tips help you interpret the results.

Swap Among Different Dimensions

While this is first and foremost a blog post about showing how many people selected a certain number of items (and what they selected), it was very easy to add a parameter that allows you to swap among different dimensions.  In Figure 8 we see the breakdown by Location.

Figure 8 — Use the Break Down by parameter to see rank and magnitude for the selected item among different dimensions, in this case Location.

Here’s the embedded workbook for you to try out and download.

Feb 22, 2017
 

February 22, 2017

Overview

Earlier this week Gartner, Inc. published its “Magic Quadrant” report on Business Intelligence and Analytics (congratulations to Tableau for being cited as a leader for the fifth year in a row).

Coincidentally, this report came on the heels of one of my clients needing to create a scatterplot with four equally-sized quadrants even though the data did not lend itself to such a layout.

In this blog post we’ll look at the differences between a regular scatterplot and a balanced quadrant scatterplot, and show how to create a self-adjusting balanced quadrant scatterplot in Tableau using level-of-detail calculations and hidden reference lines.

The Gartner Magic Quadrant

Let’s start by looking at an example of a balanced quadrant chart.

Here’s the 2017 Gartner Magic Quadrant chart for Business Intelligence and Analytics.

Figure 1 — 2017 Gartner Magic Quadrant for Business Intelligence and Analytics

Notice that there are no numbers along the x-axis and y-axis, so we don’t know what the values are for each dot.  Indeed, we don’t know how high or low your “Vision” and “Ability to Execute” scores need to be to fit into one of the four quadrants. We just know that anything above the horizontal line means a higher “Ability to Execute” and anything to the right of the vertical line means a higher “Completeness of Vision.”  That is, we see how the dots are positioned with respect to each other versus how far from 0 they are. In fact, you could argue that the origin (0, 0) could be the dead center of the graph as opposed to the bottom left corner.

This balanced quadrant is attractive and easy to understand. Unfortunately, such a well-balanced scatterplot rarely occurs naturally as you will rarely have data that is equally distributed with respect to a KPI reference line.

A Typical Scatterplot with Quadrants

Consider Figure 2 below where we compare the sum of Sales on the x-axis with the sum of Quantity on the Y-Axis. Each dot represents a different customer.

Figure 2 — Scatterplot comparing sales with quantity where each dot represents a customer.

Now let’s see what happens if we add Average reference lines and color the dots relative to these reference lines.

 

Figure 3 — Scatterplot with Average reference lines.

I think this looks just fine as it’s useful to see just how scattered the upper right quadrant is and just how tightly clustered the bottom left quadrant is. That said, if the values become more skewed it will become harder to see how the values fall into four separate quadrants and this is where balancing the quadrants can become very useful.

Note: The quadrant doesn’t have to be based on Average. You can use Median or any calculated KPI.

“Eyeballing” what the axes should be

We’ll get to calculating the balanced axes values in a moment but for now let’s just “eyeball” the visualization and hard code minimum values for the x and y axes.

Let’s first deal with the x-axis.  The maximum value looks to be around $3,000 and the average is at around $500 so the difference between the average line and maximum is around $2,500.

We need the difference between the average line and minimum value to also be $2,500 so we need to change the x-axis so that it starts at -$2,000 instead of 0.

Applying the same approach to the y-axis, we see that the maximum value is around 34 and the average is around 11, yielding a difference of 23 (34 - 11).  We need the y-axis to start at 23 units less than the average, which would be -12 (11 - 23).

Here’s what the chart looks like with these hard-coded axes.

Figure 4 — Balanced quadrants using hard-coded axes values.

If we ditch the zero lines we’ll get a pretty good taste of what the final version will look like.

Figure 5 — Balanced quadrants with zero lines removed.

So, this works… in this one case. But what happens if we apply different filters?

We need to come up with a way to dynamically adjust the axes and we can in fact do this by adding hidden reference lines that are driven by level-of-detail calculations.

Adding Reference Lines

We need to come up with a way to calculate what the floor value should be for the x-axis and the y-axis.  The pseudocode for this is:

Figure out what the maximum value is and subtract the average line value, then, starting from the average line, subtract the difference we just computed.

Applying a little math, we end up with this:

-(Max Value) + (2*Average Value)

Let’s see if that passes the “smell” test for the y-axis.

-34 + (2*11) = -12
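That same arithmetic, as a tiny Python sketch using the eyeballed values from this post:

```python
# Axis floor so the average line sits exactly mid-axis:
# floor = avg - (max - avg) == -max + 2 * avg
def axis_floor(max_value, avg_value):
    return -max_value + 2 * avg_value

print(axis_floor(34, 11))     # -12   (the y-axis example above)
print(axis_floor(3000, 500))  # -2000 (the eyeballed x-axis value)
```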

Now we need to translate this into a Tableau calculation.  Here’s the calculation to figure out the y-axis reference line.

Figure 6 — Formula for determining the y-axis reference line.

And here’s the same thing for the x-axis:

Figure 7 — Formula for determining the x-axis reference line.

Now we need to add both calculations onto Detail and then add reference lines as shown below.

Figure 8 — Adding the x-axis reference line. Notice that the line is currently visible. Further note that we could be using Max or Min instead of average as the value will stay the same no matter what.

Here’s what the resulting chart looks like with the zero lines and reference lines showing.

Figure 9 — Auto-adjusting balanced quadrant chart with visible reference lines and zero lines. The reference lines force the “floor” value Tableau uses to determine where the axes should start.

Hiding the lines, ditching the tick marks, and changing the axes labels

Now all we need to do is attend to some cosmetics; specifically, we need to format the reference lines so there are no visible lines and no labels, as shown in Figure 10.

Figure 10 — Hiding lines and labels

Then we need to edit the axes labels and hide the tick marks as shown in Figure 11.

Figure 11 — Editing the axes labels and removing tick marks.

This will yield the auto-adjusting, balanced quadrant chart we see in Figure 12.

Figure 12 — The completed, auto-adjusting balanced quadrant chart.

Other Considerations

What happens if instead of the values spreading out in the upper right we get values that spread out in the bottom left?  In this case we would need to create a second set of hidden reference lines that force Tableau to draw axes that extend further up and to the right.

Also note that since we are using FIXED in our level-of-detail calculations we need to make sure any filters have been added to context so Tableau processes these first before performing the level-of-detail calculations.

Could I have used a table calculation instead of an LoD calc? I first tried a table calculation and ran into trouble trying to specify an average for one aspect of the calculation and a maximum for another using the reference line dialog box. I may have given up too early, but I got tired of fighting to make it work.

Note: Jonathan Drummey points out that we can in fact use INCLUDE instead of FIXED here so we would not have to use context filters. If you go this route make sure to edit the feeder calcs for the KPI Dots field ([Quantity  — Windows Average LoD] and [Sales  — Windows Average LoD]) so these use INCLUDE as well.

Give it a try

Here’s a dashboard that allows you to compare a traditional scatterplot with reference lines with the auto-adjusting balanced quadrant chart.  Feel free to download and explore.

Feb 15, 2017
 

Overview

Prior to spending the last two years working with Jeffrey Shaffer and Andy Cotgreave on the upcoming The Big Book of Dashboards, I tended to look at BANs (Big-Ass Numbers) — large, occasionally overstuffed Key Performance Indicators (KPIs) — as ornamental rather than informational.  I thought they just took up space on a dashboard without adding much analysis.

I’ve changed my mind and now often recommend their use to my clients.

In this blog post we’ll see what BANs are and why they can be so useful.

Examples of BANs

Here are several examples of dashboards featured in The Big Book of Dashboards, all of which use BANs.

Figure 1 — Complaints dashboard by Jeffrey Shaffer

Figure 2 — Agency Utilization dashboard by Vanessa Edwards

Figure 3 — Telecom Operator Executive dashboard by Mark Wevers / Dundas BI

Why these BANs work

The BANs in these three dashboards are useful in that they provide key takeaways, context, and clarification. Let’s see how they do these things.

Key takeaways

If you had to summarize the first dashboard in just a few words, how would you do that? The BANs shown in Figure 4 get right to the point.

Figure 4 — Concise complaints summary (we’ll discuss the colors in a moment).

The same can be said of the BANs in the Agency Utilization dashboards. By looking at the first two BANs in Figure 5, we can see that the agency made $3.8 million but could make $3.4 million more if it were to meet its billable goals.  That is the most important takeaway and it’s presented in big, bold numbers right at the top of the dashboard.

Figure 5 — If we had to distill this entire dashboard down to one key point, it would be that the current sales are $3.8M but they could be $3.4M more.

Context

The three BANs in the Telecom Operator Executive dashboard (Figure 3) not only provide key takeaways but also provide context for the charts that appear to the right of each BAN.  Consider the strip shown in Figure 6, which starts with the proclamation that ARPU (Average Revenue Per User) is $68.

Figure 6 — What contributes to Postpaid ARPU being $68?

The images to the right explain everything that goes into making the $68 (Comparison of Postpaid to Prepaid, Voice, Data, Addons breakdown, etc.)

Note that the dashboard designer packs a lot of very useful information into the box that surrounds the BAN; specifically, ARPU is up $6 YTD, but is down in Q4 compared to Q3 (that’s how to interpret the line atop the shaded bars).

Clarification

The BANs in Figures 1 and 3 aren’t just conversation starters / key takeaways; they are also color legends that clarify the color coding throughout the dashboard.

Consider the Complaints dashboard; the BANs indicate that Closed is teal and Open is red. Armed with this knowledge we know exactly what to make of the chart in Figure 7.

Figure 7 — Everything prior to November 2016 is closed and only a handful of things in November are open.

The same goes for the Agency Utilization dashboard. The BANs inform us that blue represents Fees and green represents Potential, so we know exactly how to interpret bars of those colors when we look at a chart like the one shown in Figure 8.

Figure 8 — Because the BANs told me how to interpret color, we can see that for Technology the company billed $883K but could bill an additional $1,762K if it were to hit its targets.

Conclusion

BANs can do a lot to help people understand key components of your dashboard: they can be conversation starters (and finishers), provide context to adjacent charts, and serve as a universal color legend.

Note: While I’ve tried to show how effective BANs can be, I did not address how a particular font can help or hurt your BAN initiative.

The Big Book of Dashboards co-author Jeff Shaffer has been studying font use and has a fascinating take on the new fonts Tableau added to their product this past year. You can read about it here.
