Do visualizations need to be “accurate”?

A few days ago I was re-reading the mythical paper Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods by Cleveland and McGill. The study in the paper (and a few that followed) forms the basis for one of the most important pieces of theory we have in visualization: the ranking of visual variables.

The ranking of visual variables is nothing too complicated or esoteric. It states one simple thing: some visual features are better than others at carrying quantitative information. For instance, a numerical quantity can be represented visually through the position of a dot on an x-y axis, the length of a bar, or the size of a circle. Which one is best? Position comes first, then length, and finally size. That’s basically it.

My new read was a revelation. The more I read it, the more I was surprised to see how the actual content diverged from what I recalled from my first read several years before. For instance, I was surprised to see that only a very small set of the variables in the ranking was actually tested empirically. And also that color, though never tested, was ranked so low (at the bottom of the ranking)!

Why? I don’t know. Another idea I had in mind was that the visual variables were also tested in terms of how fast we can judge them, but the paper is in fact based entirely on accuracy judgments. And this last thing really hit me like a punch in the stomach. Not so much because I am a big fan of efficiency, but because the way it was done seemed so limited to me that I really think we need to talk about it. But first, let me recall the study.

The Study.

Cleveland and McGill tested the elementary perceptual tasks (this is what they called the visual variables) by showing alternative charts of the same data to a number of subjects. Each subject had to answer the following question: “What percentage is the smaller [graphical element] of the larger?”
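If I remember the paper correctly, those percentage judgments were scored with a log absolute error, log2(|judged − true| + 1/8), where the 1/8 keeps the logarithm finite when a subject answers exactly right. A minimal sketch of that scoring (the function name is mine):

```python
import math

def cm_error(judged_pct, true_pct):
    """Log absolute error in the style of Cleveland & McGill:
    log2(|judged - true| + 1/8), in percentage points.
    The 1/8 offset keeps the log finite for perfect answers."""
    return math.log2(abs(judged_pct - true_pct) + 0.125)

# A subject who answers "60%" when the true ratio is 50%
# scores a higher (worse) error than one who answers "52%".
assert cm_error(60, 50) > cm_error(52, 50)
```

Averaging this error per encoding across many trials is what produces the ranking: the encodings people judge with lower mean error sit higher in it.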

These are the charts used in the first experiment (the subject compared the elements marked with a dot):

The second experiment compared a pie chart to a bar chart (which, by the way, is probably the reason why pie charts have such a bad reputation today):

What’s the problem with that? Accuracy and the specific kind of accuracy they tested.

Accuracy and the conceptual trap.

The first question that came into my mind was: if visualization is meant to direct the eye towards patterns and trends, why do we judge its effectiveness in terms of how people extract precise numbers out of it?

One fundamental lesson I learned from Stephen Few’s Show Me the Numbers is that when numerical quantities are an important factor in the analysis, tables and text are much more effective than graphs. Why, then, do we measure charts in terms of accuracy and base the whole theory of chart design on top of it?

The funny thing, which few people noticed I guess, is that Cleveland and McGill actually provide an answer to this question in the paper!

One must be careful not to fall into a conceptual trap by adopting accuracy as a criterion. We are not saying that the primary purpose of a graph is to convey numbers with as many decimal places as possible. We agree with Ehrenberg (1975) that if this were the only goal, tables would be better. The power of a graph is its ability to enable one to take in the quantitative information, organize it, and see patterns and structure not readily revealed by other means of studying the data. Our premise, however, is this:

A graphical form that involves elementary perceptual tasks that lead to more accurate judgments than another graphical form (with the same quantitative information) will result in better organization and increase the chances of a correct perception of patterns and behavior.

However, to me it looks like the studies do not match the question posed by the theory. If the theory states that more accurate visualizations lead to “better organization and increase the chances of a correct perception of patterns and behavior” why didn’t they test for it? The studies demonstrate that people can better judge the relative size of elementary visual features when taken in isolation, not that when better features are used in the graph the whole graph is more accurate.

Another doubt I have concerns the specific kind of accuracy they tested. While I certainly agree that accuracy is very important in visualization design, and that more accurate mappings most likely DO lead to more accurate graphs, my experience with visualization tells me a different story. We judge visual representations in gross terms, not on precise quantifications. I more often find myself thinking in terms of: “this is bigger than that”, “this comes first, then this, then that”, “this is a lot different than the rest”, and so on. That is, gross evaluations, not precise quantification.

And if this is what matters, I am not sure whether accuracy of judgment is the best way to go. More important is the capability of distinguishing differences regardless of their size or accuracy (to some extent), and even more so the careful control of distracting effects and interactions originating from other visual variables.

For instance, my gut feeling is that the large majority of “bad” visualizations are not bad because they distort the values they represent (Brrr … can you see Edward Tufte behind me, lifting a hammer over my head, ready to strike as soon as this sentence ends?), even if this is bad practice. In my humble opinion, more problems stem from poor spatial organization, from the friction between the way a concept is represented and the user’s mental model, and from a poor understanding of how several visual factors interact.

What’s your experience? Does accuracy play a big role? If not what comes first?

Two examples where my intuition differs from the theory.

There are two visual features that we use over and over in visualization that seem to break the rules of the theory: color and size. There are so many visualization techniques that convey quantitative information through these two variables that one of the following must be true:

  1. either we can and should redesign all the techniques in terms of better performing visual variables
  2. or the theory is not correct
  3. or the theory does not explain everything

I personally think the third is the closest to the truth. The theory is a very valid support for visualization design and we always have to keep it in mind, but visualization design is much more complex than that.
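Take color, for example: it can carry quantitative information reasonably well when the scale has a steady progression in lightness, as in the sequential palettes heat maps typically use. A minimal sketch of such a mapping (a toy linear RGB ramp with endpoint colors I picked for illustration; real palettes like ColorBrewer’s sequential ones are tuned perceptually, not just linearly in RGB):

```python
def value_to_color(v, vmin=0.0, vmax=1.0):
    """Map a quantitative value to an RGB color on a single-hue
    ramp whose lightness increases steadily with the value."""
    t = (v - vmin) / (vmax - vmin)   # normalize to [0, 1]
    t = min(max(t, 0.0), 1.0)        # clamp out-of-range values
    # interpolate from a dark blue to a very light blue
    r = int(8   + t * (222 - 8))
    g = int(48  + t * (235 - 48))
    b = int(107 + t * (247 - 107))
    return (r, g, b)

# Low values map to the dark end, high values to the light end
assert value_to_color(0.0) == (8, 48, 107)
assert value_to_color(1.0) == (222, 235, 247)
```

The point is that the ordered lightness channel, not hue per se, is what lets color encode quantity at all; and even then it supports the gross comparisons discussed above better than precise readings.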

More importantly we have to ask ourselves a tough question: is the theory of graphical perception enough? And my short answer is no.

What must we do?

Well … first of all, until we have something better to offer, let’s keep using this theory, because it’s the best thing we have. Frankly, despite the large body of literature we have on perception applied to visualization (Colin Ware’s book above all), there are very few examples of theories with such a direct impact on design practice as this one.

As a second point, we have to realize that if the theory we would like to have is not there yet, WE have to make it happen! Yes, it’s time for us to understand that while visualization hits the public and becomes more mainstream, we have the responsibility to make it a more serious discipline.

There’s no way to escape here: either we continue to give vague advice made of rules of thumb, guidelines, and tricks, or we take the responsibility to understand how it works first and then show it to other people. Without that we are all amateurs, regardless of the long list of publications or developed products we have.

11 thoughts on “Do visualizations need to be “accurate”?”

  1. Dan Murray

    I think the theory is incomplete and that one can overcome inaccurate representation with groupings of complementary visualizations. Sometimes the client doesn’t require accuracy because they value a certain visual approach more than accuracy. While this can be undesirable, there are usually means available to mitigate the issue in ways the client finds acceptable.

  2. jakob

    Great read! I totally agree that the hypothesis of the study does not carry across all uses of visualization. But I don’t understand the point you make about the efficacy of color. I get that color can encode meaning, but quantitative display? Semantically there is a huge difference between color and size, in that color, unlike size (or a spatial relation of proximity for that matter), is NOT a concept of a continuum. There are very interesting conceptual phenomena revolving around the semantics of color, because different cultures have different thresholds and different numbers of basic color terms. So I would agree with the authors that color is not a visual metaphor suited to accurately display quantitative differences, even without a study to test this.

  3. Jorge Camoes

    I hate to use this example, but here it goes. In spite of what Tufte or Few say, there are as many articles proving that people can accurately (or “more accurately”) read pie charts as articles proving the opposite. And they can be right, all of them… Soooo, unless you can explain how pie charts work, you can’t have “a more serious discipline”. After all, it’s pie charts, how hard can it be?

    IMHO, if dataviz is also about aesthetics and emotional responses you can’t have “a more serious discipline”. You can try to build a more consistent one, and that’s good enough for me.

  4. Martin

    Hey Enrico,

    I agree with you that “we judge visual representations in gross terms, not on precise quantifications”. The relations you mentioned are the interesting ones we’re working with. If precise information is needed, labels will mostly help us. Further, I think it’s a kind of Shneiderman’s mantra: “Overview first, …, then details-on-demand.” The accuracy is needed at higher detail levels and not at first sight. Maybe characteristics like the level of detail, the overall complexity of the representation, or the parallel use of visuals are factors for judging how accurate a visualization needs to be.

    As I’m working on a recommendation system for existing graphic representations, I try to formalize factual visualization knowledge from the literature, e.g., the accuracy results from Cleveland and McGill, to facilitate a ranking. But now I’m not sure if I should use this theory or not. Or others… Thanks for opening my eyes not to use “everything” without strongly thinking about it.


  5. Enrico Post author

    @Jan: I am thinking about it! There is so much to do in this area that I think we really need to take some steps forward in this direction. Jeff Heer did a few things along these lines recently.

    @Dan: I do think accuracy is relevant, but I also see how we have to learn to juggle between this and other needs one might have.

    @Jakob: Color CAN be used to represent quantitative values if a well designed and balanced color scale is used (there is a lot of research on that actually). Basically, as long as a scale is designed in a way to have a steady progression in value (brightness) it can be used to represent quantities. Heat Maps are a great example of how this can work. And it works pretty well in some situations! I hope it’s clearer now. But I agree, maybe in my post the distinction was not clear.

    @Jorge: I was waiting for your comment! :-) Yeah, I think I understand how it hurts. But it’s true! Also, I was never persuaded by the endless battle against pie charts. Regarding emotion and aesthetics: they DO play a role and we just cannot avoid it. The sooner we recognize that the better.

    @Martin: I am glad to hear this post was useful, and it’s great to know you are doing research in this area. Just a little note: I would be careful about discarding the Graphical Perception theory. I hope I am not suggesting with my post that it has no value! As I said, it’s the best we have so far and it’s very well crafted. I think many of these things are still valid; it’s just that we need to do more.

  6. Jan Willem Tulp

    Your post has really been on my mind, and I was wondering if the Visual Information Seeking Mantra by Ben Shneiderman (“overview first, zoom and filter, then details-on-demand”) is a useful reference: it might be related to your question; especially the ‘overview first’ part may be similar to your intuition that we (initially) judge a visual representation in gross terms.

    Also, I wonder if the volume of information being visualized is a factor that influences your intuition and accuracy measurement research: for just a single bar chart, accuracy may be the right thing to measure, but the larger the volume becomes, the less important an accuracy measurement will be initially, and an overview first will be more important (is there research on that?)

  7. jakob

    @Enrico Thanks for clearing that up. Yes, saturation and lightness are visible as a continuum; I should have thought of color this way. Still, I think the prominence of “proximity” (spatial coordinates) over other means of visualization is justified in cognitive science insofar as there is greater salience to spatial relations in our perception. Some people (Jackendoff, for example) would go even further and say that spatial relations are the core of attributes for concepts, a view linguists call

    1. Enrico Post author

      @Jakob You are right, position is definitely the most relevant visual feature and there are no doubts about that, but the ranking includes things like volume and orientation, which are never used in modern visualization, and my guess is that they are not very powerful. Plus, of course, color is the only feature which has no spatial extent, and this makes it a very special one.

      @Jan Complexity is indeed a big issue here. This is another reason why I am a bit skeptical. It’s not clear how these rules scale to the complexity of modern visualizations. I think the whole theory of attention and what helps to direct our eyes has a big role here, as well as our mental schemas and models.

  8. SellaHaAdom

    Great work, Enrico! I haven’t had such an inspiring read for a long time! Thank you for sharing your thoughts on this.

    I fully agree, why should we judge by accuracy?
    Many researchers have instead studied perceptual bias as a measure of efficiency!
    In these works, they pointed to different biases that should be avoided when designing visualizations…
    Maybe we should ask: when does info viz tell the “wrong” story?

    1. Enrico Post author

      Yeah I agree … and sometimes it’s the right story in the wrong way. Which is actually more relevant from the visualization point of view. I’d be happy to hear more about these works on bias you mentioned. They look very interesting. Can you please let me know more about it? Thanks.

