How Do We Achieve the Right “E-Cube-Librium” in Visualization Marathons?

by Enrico on February 14, 2012

in Thoughts

“Houston, we have a problem …”

I just received this in my inbox:

We want to again express our sincerest gratitude for your help in making the Visualizing Marathon 2011 such a resounding success. Your participation was instrumental and the 376 students who competed in Sydney, São Paulo, New York, London, and Berlin told us how excited they were to meet you and have their work reviewed by such an esteemed global jury.

We just announced that the winner of the $10,000 “Imagination at Work” Grand Prize is Columbia University for E-Cube-Librium [...] Out of 15 finalists, the Grand Prize was awarded to the project that “best illuminates a new insight or solution to a complex problem through data visualization.” [bold is mine]

I received this because I was part of the jury for the Marathon in Berlin. The Visualizing Marathon is a series of events (inspired by the more famous hackathons) organized by Visualizing.org around the world to promote visualization. Groups of students develop a visualization for a given dataset/problem in 24 hours, and Visualizing.org gives awards to the best entries.

Being a juror was fun and an honor for me, as was being one of the speakers at Visualizing Europe last year. I am grateful to Visualizing for the great work they are doing in terms of promotion, and also for their commitment to building a solid platform for visualization designers. Nonetheless, I think we have a problem.

I look at E-Cube-Librium and I cannot help but think: “Is this the best 376 students from all over the world can produce?” It just doesn’t match with my definition of a visualization that “best illuminates a new insight or solution to a complex problem”.

I am really sorry to have to say this, especially because I am sure the students did their best and are probably proud of their work, and also because I am sure the people at Visualizing.org have the best intentions in mind. But I am also concerned that people around the world would look at the best prize winner and think this is the gold standard of visualization. We have to be careful, especially now that visualization is in the mainstream, about what message we send. I have seen visualization dismissed altogether too many times because people think it’s only pretty pictures. Our reputation and future is at stake here.

Now, we have a nice series of events organized around the world, and I am all in favor of data visualization evangelism, but why are the results disappointing? Is it an intrinsic problem of marathons and contests, or can we engineer the whole thing to make it more effective? Here are some potential explanations drawn from my experience:

  1. Time is too short to produce quality results. Every time I complain about the quality of the results, someone points out that time is too short. I do think time is too short, but I am not fully convinced it is the main problem. Basic design choices do not depend on the amount of time available. It doesn’t take time to know that a 3D visualization of numeric data should not be your first choice when designing a visualization; it takes knowledge.
  2. Students are not well prepared. That students are not knowledgeable enough to produce quality results is not surprising. Visualization has become mainstream only recently, and there is no clear path to follow if one wants to become an expert. Nonetheless, some of the entries I personally reviewed as a juror were more than reasonable, especially given the 24-hour constraint! Also, looking at the page with the whole set of winners and honorable mentions, it is surprising to see how neat solutions coexist with very questionable ones.
  3. Jurors select the wrong entries. Another possibility is that jurors simply pick the wrong entries. I don’t know who selected the grand prize winner, as I was not involved in the process, but my feeling is that this is where we might have the biggest mismatch. When I participated as a juror, it became clear to me how things can go wrong. Some people put clarity and information throughput before everything else (guess who?), others judge things from their coolness factor. I know, it’s sad but that’s the way it is.
How can we make better marathons? A few modest suggestions.
Without pretending to provide all-encompassing or particularly clever solutions, here are a few things that come to mind:
  • Give more time. If time is too short, why not give more time? The marathon format does not lend itself to data visualization. Visualization is a process, a tortuous one actually, with lots of dead ends along the road. Expecting to visualize data effectively in 24 hours might be an unrealistic goal.
  • Train students before the marathon takes place. If students are not prepared enough, why not give them some training before running the marathon? There are many professionals out there who can explain concisely what the no-nos and good practices of visualization are.
  • Run marathons without prizes. Maybe a marathon could be held without giving a prize? I don’t know … is it the prospect of receiving a prize that motivates students to do their best? Maybe not. Maybe just knowing that they will have the opportunity to be trained by a professional and to get a certain level of exposure will motivate them enough. I think competition is totally overrated.
  • Let people judge in place of jurors. One option could be to have “better” jurors, but then we would have to discuss what we mean by “better”. As an alternative, why not let people judge? I am not sure the result would be better, but at least we could claim it is a democratic process, and it wouldn’t embarrass any jurors.

And you? What do you think? Do you have any concerns about contests and marathons? How would you shape your own marathon event? Do you have any suggestions on how to improve the situation? I’d love to hear your voice.

IMPORTANT NOTE: I had the chance to discuss this with some people at Visualizing before publishing this post. Since I totally respect their work and wanted to avoid bashing them with an overly unfavorable post, I decided to let them read it before publishing it. Apart from a few sentences here and there, the post is the same as the original draft.

Charlene Manuel was also kind enough to send me a long reply to this post, which I decided to publish soon as a post rather than a comment, so that everyone can get a sense of how Visualizing is handling this criticism.

I am very satisfied with this process. I think we should all be happy to see that it is possible to have constructive criticism and help the whole field thrive without unnecessary battles.

UPDATE: here is the answer from the Visualizing team.

  • Martin (http://www.uni-salzburg.at/portal/page?_pageid=142,1269561&_dad=portal&_schema=PORTAL)

    Hi Enrico!

    Thanks for sharing your impressions. I guess you are right … Zapping through the gallery makes it obvious that the “coolness factor” was valued most. Unfortunately, Edward Tufte was not a member of the jury.

    Most of the visualizations remind me of ZEIT/Spiegel-like infographics, which are definitely not bad at all. But they are journalism. For example, http://www.visualizing.org/full-screen/37407 is journalism in the best case and politics in the worst case ;-)

    I guess most of the participants have a design and not a statistical/analytical/cognitive science background?

    • Anonymous

      As I said, I have nothing against coolness; it’s just the mismatch between what it is claimed to be and what it is that concerns me a bit. But the people behind Visualizing are very open to criticism and they really have the best intentions.

  • infosthetics

    “Is this the best 376 students from all over the world can produce?”
    Yes, I would state it is!

    “It just doesn’t match with my definition [of] visualizations”
    Yes, but maybe it matches with other definitions of ‘visualization’? For instance in a definition that ranked projects on originality, innovation, aesthetics and visual style for 50% of the total mark?

    “Some people put clarity and information throughput before everything else (guess who?), others judge things from their coolness factor. I know, it’s sad but that’s the way it is.”
    Why is this sad? On the contrary, it is an immense opportunity.

    “people … look at the best prize winner and think this is the gold standard”, “our reputation and future is at stake here”
    Student competitions tend to have a particular purpose in our society, and it is generally not to demonstrate domain-specific dogmas, or to put forward a “gold standard.” I would think there are other competitions out there better suited to accomplish this (e.g. NSF SciVis, the VisWeek contest)?

    “Let people judge in place of jurors”, “… but then we would have to discuss what we mean by “better””
    I think you mean: “we would have to discuss what *they* mean by ‘better’” (“they” being the people at large). I am not sure whether you really want to engage in that discussion…

    • Anonymous

      Ah, that’s Andrew at his best! My rebuttal:

      >>”Is this the best 376 students from all over the world can produce?”
      >> Yes, I would state it is!

      No, I have been a juror and I know that students can do better. I am also a teacher and know very well students can do a whole lot better when properly trained.

      >>”It just doesn’t match with my definition [of] visualizations”

      >>Yes, but maybe it matches with other definitions of ‘visualization’? For instance in a definition
      >>that ranked projects on originality, innovation, aesthetics and visual style for 50% of the total
      >>mark?

      Fine with me, as long as you don’t claim the winner was selected because it “best illuminates a new insight or solution to a complex problem through data visualization”. The solution does not do that.

      >>”Some people put clarity and information throughput before everything else (guess who?), others
      >>judge things from their coolness factor. I know, it’s sad but that’s the way it is.”

      >>Why is this sad? On the contrary, it is an immense opportunity.

      Because we live in an age of data and information overload, we don’t have the luxury of putting the coolness factor before clarity. Also, as I said above, it’s all fine as long as the purpose is clearly stated.

      >>”people … look at the best prize winner and think this is the gold standard”, “our reputation and
      >>future is at stake here”

      >>Student competitions tend to have a particular purpose in our society, and it is generally not to
      >>demonstrate domain-specific dogmas, or to put forward a “gold standard.” I would think there are
      >>other competitions out there better suited to accomplish this (e.g. NSF SciVis, the VisWeek contest)?

      Let’s put it this way: what happens if you start giving the best marks to your sub-optimal students?

      >>”Let people judge in place of jurors”, “… but then we would have to discuss what we mean by
      >>“better””

      >>I think you mean: “we would have to discuss what *they* mean by ‘better’” (“they” being the
      >>people at large). I am not sure whether you really want to engage in that discussion…

      I mean we should discuss who is qualified to be a juror and who is not. I have a couple of very personal ideas on who would be qualified, but this might indeed be very questionable.

      Thanks Andrew for your alternative thoughts as usual! You know … after all it’s a lot of fun. Can I reserve a spot for you in Data Stories (www.datastori.es)?

  • Jorge Camoes (http://www.excelcharts.com/blog/)

    Enrico, this can be solved with these simple rules:

    1. There is a “Stephen Few Prize” and a “David McCandless Prize” and students should decide where they want to compete on a limited and first-come, first-served basis (so that the Stephen Few prize gets some projects :) )

    2. Two groups should study the winners for five minutes and write down all the insights they get from each of them. The overall winner is the one with the most insights, obviously.

    We need more and better chart types. Probably the students can come up with interesting (because unfiltered) ideas, so don’t worry much about these results.

    However, these competitions should be clearer about their goals. If coolness is what they want that’s not a problem. But if they want insights and the winner is based on how cool it is, then there is a conflict to solve.

    • Anonymous

      LOL!!! … I love the idea of the two prizes. The Stephen Few prize reminds me of how scared I was of some of my teachers at school.

  • Matt Fehskens // @gonzofish

    I have to absolutely agree here. If history has proven one thing in almost any industry or discipline, it’s that form will improve with function. Heck, it can even come along at the same pace. The problem is that it seems you have a journalistic approach here, where we don’t necessarily have hardcore stats people but rather a more artsy group trying to develop these “new insights”. Like you said, students need solid training prior to the competition. If they have a better understanding of which algorithms (or combinations of algorithms) best fit their data set, they end up with a better foundation to bring the form/beauty/coolness to a new level.

  • OOM Creative (http://twitter.com/gregmore)

    Data visualization is not a sport; however, a marathon over a couple of days is a sprint compared with what a normal data visualisation project takes to design and develop. The main thing I find missing in these comps is the role of the client, as one gets the most insights from clients who can read into your data visualisations, and, alas, the students who get data from a host organisation probably don’t get direct responses and feedback from a client who would view their data vis work in development.

    • Ben

      I understand where you are coming from, Greg, but do you really think 48 hours is a sprint? If you consider that there are 5-6 people on each team working over 48 hours, at a diminishing rate of return to be sure, you get 2+ weeks of effort in person-days. Even if we halve this and say one week of productive work among a diverse group of people, you should get some good results.

      • OOM Creative (http://twitter.com/gregmore)

        Yes, it’s a sprint. I think the longer-format Visualizing comps generate better outcomes, as they also allow for more reflection in the design process by students or professionals.

  • Ben Hosken

    Having been part of the Sydney leg of the events, and specifically having met and spoken to many of the students who participated, I find your comments quite disingenuous, and I would imagine many of the 376 students would too.

    The concept of the 24- or 48-hour marathon is very sound if you look at it from the realistic perspective that no one expects you to have a polished piece produced within this time. The concept is as much about working as a team on a challenge as it is about producing a client deliverable. They are very different beasts.

    That said, I would argue that the winner’s piece is as good or better than many pieces produced by “professionals” and it was done in 48 hours.

    With regard to training beforehand, I would completely disagree. The most wonderful thing I saw at the Sydney event was the massive diversity of skills the students had. There were Computer Science, Design Computing, Fine Arts, Graphic Design, and even Industrial Design students, all focusing on a topic that was no doubt very foreign to them: health and ageing in Australia. This may be quite different to your experience in Berlin, where you may well have had info viz students only?

    One of my comments on the day was that I would love to have seen some nursing or social health students partake in this event as well. Why? Because one of the challenges (I would say the biggest) in creating data visualisations is not the correct use of pre-attentive perception theory but rather understanding the domain of the problem and the client.

    Rather than focus on why it didn’t meet a particular set of data viz dogma, whoever’s that may be, I suggest that we strongly encourage this type of event, which brings together such a range of students from diverse disciplines, and look for opportunities to expand the outreach to include students as well.

    It is a student competition and should ultimately look to develop both the next generation of data visualisation professionals and, more importantly, I would argue, the bridge between professions with current and future data problems and the field of data visualisation.

    Ben Hosken
    Founder
    Flink Labs

    • Anonymous

      Thanks, Ben, for sharing your direct experience of the event with us. A few thoughts on your comment …

      1. “That said, I would argue that the winner’s piece is as good or better than many pieces produced by “professionals” and it was done in 48 hours.”

      Maybe yes … if you refer to “professionals”. But does this give more credit to the solution?

      2. “Because one of the challenges (I would say the biggest) in creating data visualisations is not the correct use of pre-attentive perception theory but rather understanding the domain of the problem and the client.”

      I cannot agree more!!! But do you think the prize winner would help experts make sense of these data?

      3. “I suggest that we strongly encourage this type of event, which brings together such a range of students from diverse disciplines, and look for opportunities to expand the outreach to include students as well.”

      I am all in favor of such events; I am only concerned about the prize. And this is also the reason why I tried to include Visualizing directly in the discussion. I hope all this will help.

      • OOM Creative (http://twitter.com/gregmore)

        Universities generally have very strict rules for how student work is disseminated, published, distributed, etc. These are normally there to protect students as well as universities from issues around the use and abuse of course-generated material. It’s also so student material isn’t taken out of context – providing a safer, risk-free environment to test ideas as part of the learning experience.

        Web 2.0 submission portals for comps completely break from this model, and therefore we have discussions about outcomes that potentially wouldn’t have garnered this level of interest in the past.

        It is student work, done in 48 hrs – however, once in the public domain (the net), it is open for discussion from all angles. This thread is a great way to advance the best approach for short-format hothouse comps for students interested in data viz. It’s an emerging field, and whether comps work for it or not remains to be seen.

  • Jakob (http://blog.jochmann.me)

    Sorry, I’d love to take my time to really present a coherent argument, but I can only spare a few minutes. Here goes…

    My main issue lies with the way you approach failure. You do not seem to view it as a learning opportunity. A chance to find out where things went wrong – in the design, that is, rather than in the education of the students you shoot down ever so nonchalantly in a tangent. Maybe they do have useful knowledge; they just don’t yet know how to apply it successfully.

    Perhaps you did not mean it that way, but in spite of all your hedging (asking for approval before publishing), this rant of yours seems a bit overzealous and, frankly, cynical to the point of being smug, coming from a member of the panel. It does nothing to help bridge the gap and is just plain unfair to the students who are not part of our little online debate club to defend themselves. Not that it would matter, if I understand correctly that their opinion does not matter to you when you say that you are looking for better students to take their place.

    The premise of your post does not allow for garnering anything useful out of the designs or the process that led up to the results we are presented with. I agree that a lot of the entries do fail to communicate effectively. But I disagree that the works do not merit the benefit of the doubt to at least look for decisions that made sense to their creators at the time and find out why they did not work. Or, better yet, which aspects actually do work, albeit in an incomplete and not yet functional way. That’s something good that can come from sprints, you know, pressure to not overthink things, pressure to deliver, pressure to one-up the competition and, if the stars align, innovate.

    “Because they are not following THE METHOD(tm)” surely can’t be why you put them down, you looking for innovative uses of visualization as a scientist after all? Yet sometimes the unsubstantiated use of “best” when people characterize visualization reeks of fundamentalism, because “best” is hardly ever used as a relative concept. For crying out loud, some people believe that you (and other visualization people with a statistics background) only accept bar graphs as valid and that every chart that is not one goes against a divine dogma.

    Surely that is not the kind of line you meant to draw in the sand?

    • Anonymous

      Thanks, Jakob. My answers are below.

      1. “My main issue lies with the way you approach failure. You do not seem to view it as a learning opportunity …”

      From what part of my post can you infer that? I give a lot of importance to failure! I actually think the best designs come from a lot of trial and error (see Moritz Stefaner’s chapter in Beautiful Visualization). I am in fact concerned that many visualizations are posted on the web as they are, without even a minimal description of the process followed. Also, while I am all in favor of letting students learn by making a lot of errors, I cannot find a single reason for giving them a prize based on their “failure” while claiming it “illuminates”.

      2. “… It does nothing to help bridge the gap and is just plain unfair to the students who are not part of our little online debate club to defend themselves.”

      I think it does, from the very moment the debate opens the way to a candid and constructive discussion with the organizers. I was very happy to listen to what they had to say, and I think they found it important to take my concerns into consideration. You will see that in their reply, which I’ll post tomorrow.

      3. “… their opinion does not matter to you when you say that you are looking for better students to take their place.”

      Where did I say that?

      4. “… The premise of your post does not allow for garnering anything useful out of the designs or the process”

      It’s not my premise that does not allow that but the format of the event itself. What people see, unless they participated in the event, is only the final product, not the process. And this is the only way I have to judge it.

      5. “Because they are not following THE METHOD(tm)” surely can’t be why you put them down, you looking for innovative uses of visualization as a scientist after all?”

      I don’t think there is THE METHOD in visualization, but I do think there are ineffective methods. After writing the first draft of my post I wanted to double-check whether the entry had some hidden value I had not noticed. I spent considerable time reading the whole documentation and played the role of the intended audience. Maybe I am the only one who finds it hard to understand but, believe me, I could not get any useful information out of it. Have you tried to use the visualization yourself? Can you learn something from it?

      6. “For crying out loud, some people believe that you (and other visualization people with a statistics background) only accept bar graphs as valid and that every chart that is not one goes against a divine dogma.”

      It hurts if this is the impression you get from me, because I am myself annoyed by overly dogmatic criticism. But you can take a look at my own work and see with your own eyes that I am not doing (only) bar graphs >> enrico.bertini.me. You might also want to take a look at my post “Do visualizations need to be “accurate”?” >> http://bit.ly/xK9FYo; I think you would find it interesting. However, I do think we have a problem when E-Cube-Librium is awarded the grand prize because it “illuminates a new insight or solution to a complex problem”.

      Thanks again for your post.

  • Visualizing

    As Enrico mentions, we have already sent in some comments on his perspective, but we would like to provide context for this current discussion.

    For our Visualizing challenges and marathons, visualizations are judged on three criteria: understanding (50%), originality (25%), and style (25%). Our global jury discussed and reviewed the visualizations with this in mind and selected the winners and honorable mentions for each city. However, when it came time to select the Grand Prize winner, we asked every team to submit an essay that shared with us an insight or solution that emerged from their process and work. E-Cube-Librium was awarded the prize based on the insight/solution on sustainable development proposed in the team’s essay. The full essay is included here http://www.visualizing.org/stories/visualizing-marathon-2011-grand-prize-winner and we encourage everyone to read what the students have to say.

    We think this is a valuable discussion and look forward to the feedback from our expanded post.

  • Santiago Ortiz (http://twitter.com/moebio140)

    This is my humble opinion on this matter: what fails here is having a single winner. And it’s a failure on two levels:

    – first, because it sends an undesired message: that the main purpose of these amazing workshops is to set up the conditions for the creation of a single great project and to identify the next top-notch visualization developer or team. That might be the case for other contests, but I don’t believe it is the case here.

    – second, because I do believe the results are great and the level attained is high… if you see all the projects together. At the same time, any vis expert will find issues in any of the projects seen one by one (something that happens even with gurus’ creations). You won’t find a single perfect project, so if you choose only one to represent the whole, its issues will be highlighted. I think the marathon achieved an amazing goal: to produce dozens of projects with a high average level and a low standard deviation. From the point of view of the goals of the workshop (democratization) this is really great, something to celebrate.

    My point is that if the final selection were made of 10 projects, for instance, the perception of the level of the entire set of projects would be higher.

    • Anonymous

      Santiago, this is very much in the spirit of what I wrote; I totally agree. I mean, as soon as a prize is given, you open the door to criticism. We have exactly the same thing in the academic environment. VisWeek gives a best paper award every year, and there are always discussions around it. Remember Stephen Few’s recent rant on the VisWeek 2011 best paper award (http://www.perceptualedge.com/blog/?p=1090)? This is of course even more critical when you have students developing visualizations in 24 hours!

  • Anonymous

    I am not saying that the contest attracts students who are less trained than mine; what I want to say is that it would be very beneficial if they could receive “some” training before the contest.

    Regarding the different ways in which you can interpret “best illuminates”, I agree to some extent with what you say, at least in principle, but hey … if a visualization doesn’t first help us (better) understand a phenomenon, then everything is relative. You can call anything visual a data visualization, and there is no way to guide people toward graphical excellence. I am all in favor of beauty and experiments, but not to the detriment of clarity.

    Putting the coolness factor before clarity sends a distorted message. I agree there is a risk of hindering creativity and experimentation to some extent, but “proper” experimentation IMHO comes from people who know they are bending the rules, not from those who don’t know the rules in the first place (of course, this is now the very dogmatic version of me :-))

    “A mediocre entry will still win if it is “better” than all other entries.”

    I think this is a very relevant problem for contests. You see the same thing with academic conferences. There are conferences that claim to be good because they have a low acceptance rate, but then they receive only those papers that have been rejected from the top-notch conferences. And you can see the difference.

    Thanks a lot! Very stimulating discussion.
