Do visualizations need to be “accurate”?

by Enrico on May 22, 2011


A few days ago I was re-reading the legendary paper Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods by Cleveland and McGill. The study in the paper (and a few that followed) forms the basis for one of the most important pieces of theory we have in visualization: the ranking of visual variables.

The ranking of visual variables is nothing too complicated or esoteric. It states one simple thing: some visual features are better than others at carrying quantitative information. For instance, a numerical quantity can be represented visually through the position of a dot on an x-y plot, the length of a bar, or the size of a circle. Which one is best? Position comes first, then length, and finally size. That’s basically it.
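To make the ranking concrete, here is a minimal sketch of mine (in Python with matplotlib, neither of which the paper or this post prescribes, and with made-up values) that encodes the same three numbers once as dot positions, once as bar lengths, and once as circle areas. Comparing the values by eye tends to feel easiest in the first panel and hardest in the last.

    # Minimal sketch: the same values encoded as position, length, and area.
    # Assumes matplotlib is installed; the data values are invented for illustration.
    import matplotlib.pyplot as plt

    values = [23, 37, 61]            # hypothetical quantities to encode
    labels = ["A", "B", "C"]
    x = range(len(values))

    fig, (ax_pos, ax_len, ax_area) = plt.subplots(1, 3, figsize=(9, 3))

    # Position: the quantity is the vertical position of a dot on a common scale.
    ax_pos.scatter(x, values)
    ax_pos.set_title("Position")

    # Length: the quantity is the length of a bar.
    ax_len.bar(x, values)
    ax_len.set_title("Length")

    # Size: the quantity is the area of a circle, all drawn on one baseline.
    ax_area.scatter(x, [1] * len(values), s=[v * 20 for v in values])
    ax_area.set_title("Size (area)")

    for ax in (ax_pos, ax_len, ax_area):
        ax.set_xticks(list(x))
        ax.set_xticklabels(labels)

    plt.tight_layout()
    plt.show()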

This new read was a revelation. The more I read it, the more I was surprised to see how much the actual content diverged from what I recalled from my first read several years before. For instance, I was surprised to see that only a very small subset of the variables in the ranking was actually tested empirically. And also that color, among those not tested, scored so badly (at the bottom of the ranking)!

Why? I don’t know. Another idea I had in mind was that the visual variables had also been tested in terms of how fast we can judge them, but the paper is in fact entirely based on accuracy judgments. And this last thing really hit me like a punch in the stomach. Not so much because I am a big fan of efficiency, but because the way accuracy was tested seemed so limited to me that I really think we need to talk about it. But let me recall the study first.

The Study.

Cleveland and McGill tested the elementary perceptual tasks (this is what they called the visual variables) by showing alternative charts of the same data to a number of subjects. Each subject had to answer the following question: “what percentage is the smaller [graphical element] of the larger?”
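As a concrete illustration of that task (my own sketch, not code from the study): the true answer is simply the smaller value expressed as a percentage of the larger, and a subject’s response can be scored by how far the estimate lands from that truth. The plain absolute error below is my simplification, not necessarily the exact error measure used in the paper.

    # Sketch of the judgment task: a subject sees two marked elements and
    # estimates what percentage the smaller one is of the larger.
    # Scoring with plain absolute error is my simplification, not necessarily
    # the exact measure Cleveland and McGill used.

    def true_percentage(a: float, b: float) -> float:
        """True answer: the smaller value as a percentage of the larger."""
        small, large = sorted((a, b))
        return 100.0 * small / large

    def judgment_error(estimate: float, a: float, b: float) -> float:
        """Absolute difference between the subject's estimate and the truth."""
        return abs(estimate - true_percentage(a, b))

    # Example: bars of height 30 and 75; the correct answer is 40%.
    print(true_percentage(30, 75))        # 40.0
    print(judgment_error(45.0, 30, 75))   # a subject guessing 45% is off by 5 points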

These are the charts used in the first experiment (the subject compared the elements marked with a dot):

The second experiment compared a pie chart to a bar chart (which, by the way, is probably the reason why pie charts have such a bad reputation today):

What’s the problem with that? Two things: the choice of accuracy as a criterion, and the specific kind of accuracy they tested.

Accuracy and the conceptual trap.

The first question that came to mind was: if visualization is meant to direct the eye towards patterns and trends, why do we judge its effectiveness in terms of how well people can extract precise numbers out of it?

One fundamental lesson I learned from Stephen Few‘s Show Me the Numbers is that when numerical quantities are an important factor in the analysis, tables and text are much more effective than graphs. Why, then, do we measure charts in terms of accuracy and base the whole theory of chart design on top of it?

The funny thing, which I guess few people have noticed, is that Cleveland and McGill actually provide an answer to this question in the paper!

One must be careful not to fall into a conceptual trap by adopting accuracy as a criterion. We are not saying that the primary purpose of a graph is to convey numbers with as many decimal places as possible. We agree with Ehrenberg (1975) that if this were the only goal, tables would be better. The power of a graph is its ability to enable one to take in the quantitative information, organize it, and see patterns and structure not readily revealed by other means of studying the data. Our premise, however, is this:

A graphical form that involves elementary perceptual tasks that lead to more accurate judgments than another graphical form (with the same quantitative information) will result in better organization and increase the chances of a correct perception of patterns and behavior.

However, to me it looks like the studies do not match the question posed by the theory. If the theory states that more accurate visualizations lead to “better organization and increase the chances of a correct perception of patterns and behavior”, why didn’t they test for that? The studies demonstrate that people can better judge the relative size of elementary visual features taken in isolation, not that a graph built from better-performing features is, as a whole, better organized or more correctly perceived.

Another doubt I have concerns the specific kind of accuracy they tested. While I certainly agree that accuracy is very important in visualization design, and that more accurate mappings most likely DO lead to more accurate graphs, my experience with visualization tells me a different story. We judge visual representations in gross terms, not through precise quantification. I more often find myself thinking along the lines of: “this is bigger than that”, “this comes first, then this, then that”, “this is a lot different from the rest”, and so on. Gross evaluations, not precise quantification.

And if this is what matters, I am not sure accuracy of judgment is the best criterion. More important is the capability of distinguishing differences regardless of their size or of how precisely they can be judged (to some extent), and even more the careful control of distracting effects and interactions originating from other visual variables.

For instance, my gut feeling is that the large majority of “bad” visualizations are not bad because they distort the values they represent (Brrr … can you see Edward Tufte behind me, lifting a hammer over my head, ready to strike as soon as this sentence ends?), even though distortion is certainly bad practice. In my humble opinion, more problems stem from poor spatial organization, from the friction between the way a visualization represents a concept and the user’s mental model, and from a poor understanding of how several visual factors interact.

What’s your experience? Does accuracy play a big role? If not, what comes first?

Two examples where my intuition differs from the theory.

There are two visual features that we use over and over in visualization that seem to break the rules of the theory: color and size. There are so many visualization techniques that convey quantitative information through these two variables that one of the following must be true:

  1. either we can and should redesign all the techniques in terms of better-performing visual variables
  2. or the theory is not correct
  3. or the theory does not explain everything

I personally think the third is closest to the truth. The theory is a very valid support for visualization design and we should always keep it in mind, but visualization design is much more complex than that.

More importantly, we have to ask ourselves a tough question: is the theory of graphical perception enough? My short answer is no.

What can we do?

Well … first of all, until we have something better to offer, let’s keep using this theory, because it’s the best thing we have. Frankly, despite the large body of literature on perception applied to visualization (Colin Ware’s book above all), there are very few theories with such a direct impact on design practice as this one.

As a second point, we have to realize that if the theory we would like to have is not there yet, WE have to make it happen! Yes, it’s time for us to understand that as visualization reaches the public and becomes more mainstream, we have the responsibility to make it a more serious discipline.

There’s no way to escape this: either we keep giving vague advice made of rules of thumb, guidelines, and tricks, or we take on the responsibility of understanding how visualization works first and then showing it to other people. Without that we are all amateurs, regardless of how long a list of publications or products we can point to.
