How Do We Achieve the Right “E-Cube-Librium” in Visualization Marathons?

“Houston, we have a problem …”

I just received this in my inbox:

We want to again express our sincerest gratitude for your help in making the Visualizing Marathon 2011 such a resounding success. Your participation was instrumental and the 376 students who competed in Sydney, São Paulo, New York, London, and Berlin told us how excited they were to meet you and have their work reviewed by such an esteemed global jury.

We just announced that the winner of the $10,000 “Imagination at Work” Grand Prize is Columbia University for E-Cube-Librium […] Out of 15 finalists, the Grand Prize was awarded to the project that “best illuminates a new insight or solution to a complex problem through data visualization.” [bold is mine]

I received this because I was part of the jury for the Marathon in Berlin. The Visualizing Marathon is a series of events (inspired by the more famous hackathons) organized by Visualizing around the world to promote visualization. Groups of students develop a visualization for a given dataset/problem in 24 hours, and awards are given to the best entries.

Being a juror was fun and an honor for me, as was being one of the speakers at Visualizing Europe last year. I am grateful to Visualizing for the great work they are doing in terms of promotion, and also for their commitment to building a solid platform for visualization designers. Nonetheless, I think we have a problem.

I look at E-Cube-Librium and I cannot help but think: “Is this the best 376 students from all over the world can produce?” It just doesn’t match my definition of a visualization that “best illuminates a new insight or solution to a complex problem“.

I am really sorry I have to say that, especially because I am sure the students did their best and are probably proud of their work, and also because I am sure the guys at Visualizing have the best intentions in mind. But I am also concerned that people around the world will look at the grand prize winner and think this is the gold standard of visualization. We have to be careful, especially now that visualization has gone mainstream, about what message we send. I have seen visualization dismissed altogether too many times because people think it’s only pretty pictures. Our reputation and future are at stake here.

Now, we have a nice series of events organized around the world and I am all in favor of data visualization evangelism, but why are the results disappointing? Is it an intrinsic problem of marathons and contests, or can we engineer the whole thing to make it more effective? Here are some potential explanations drawn from my experience:

  1. Time is too short to produce quality results. Every time I complain about the quality of the results, someone points out that time is too short. Even though I do think time is too short, I am not fully convinced it is the main problem. Basic design choices do not depend on the amount of time available. It doesn’t take time to know that a 3D visualization of numeric data should not be your first choice when designing a visualization; it takes knowledge.
  2. Students are not well prepared. That students are not knowledgeable enough to produce quality results is not surprising. Visualization has become mainstream only very recently, and there is no clear path to follow if one wants to become an expert. Nonetheless, some of the entries I personally reviewed as a juror were more than reasonable, especially given the 24-hour constraint! Also, looking at the page with the whole set of winners and honorable mentions, it’s surprising to notice how neat solutions coexist with very questionable ones.
  3. Jurors select the wrong entries. Another possibility is that jurors simply pick the wrong entries. I don’t know who selected the grand prize winner, as I was not involved in the process, but my feeling is that this is where we might have the biggest mismatch. When I participated as a juror, it became clear to me how things can go wrong. Some people put clarity and information throughput before everything else (guess who?), while others judge things by their coolness factor. I know, it’s sad, but that’s the way it is.
How can we make better marathons? A few modest suggestions.
Without pretending to provide all encompassing or particularly clever solutions here are few things that come into my mind:
  • Give more time. If time is too short, why not give more time? The marathon format does not lend itself to data visualization. Visualization is a process, a tortuous one actually, with lots of dead ends along the road. Expecting to visualize data effectively in 24 hours might be an unrealistic goal.
  • Train students before the marathon takes place. If students are not prepared enough, why not give them some training before running the marathon? There are many professionals out there who can explain concisely what the no-nos and good practices of visualization are.
  • Run marathons without prizes. Maybe a marathon could be held without giving a prize? I don’t know … is the prospect of receiving a prize really what motivates students to do their best? Maybe not. Maybe just knowing that they will have the opportunity to be trained by a professional and to get a certain level of exposure will motivate them enough. I think competition is totally overrated.
  • Let people judge in place of jurors. One option could be to have “better” jurors, but then we would have to discuss what we mean by “better”. As an alternative, why not let people judge? I am not sure the result would be better, but at least we could claim it is a democratic process, and it wouldn’t embarrass any jurors.

And you? What do you think? Do you have any concerns about contests and marathons? How would you shape your own marathon event? Do you have any suggestions on how to improve the situation? I’d love to hear your voice.

IMPORTANT NOTE: I had the chance to discuss with some people at Visualizing before publishing this post. Since I totally respect their work and wanted to avoid bashing them with an overly unfavorable post, I decided to let them read it before publishing it. Apart from a few changed sentences here and there, the post is the same as the original draft.

Charlene Manuel was also kind enough to send me a long reply to this post, which I decided to publish soon as a separate post rather than a comment, so that everyone can get a feeling for how Visualizing is handling this criticism.

I am very satisfied with this process. I think we should all be happy to see that it is possible to have constructive criticism and make the whole field thrive without unnecessary battles.

UPDATE: here is the answer from the Visualizing team.