For the most part, the ICFP 2012 programming competition was a great contest. There was one major fly in the ointment, however -- and that had to do with how submissions were evaluated. Each map contributed to a team's score in proportion to how many lambdas it contained; this meant that a great performance on a small map contributed very little to one's score, whereas nearly all the points in the contest could be found on the last two maps.
This is, to use a technical term, a completely bogus way to judge a competition.
Because no two maps carried the same weight, this led to some surprising results. The #1 team scored more points on the last two maps alone than the #2 team scored in the entire contest. Although the top ten finishers in every round were concealed ("to keep the suspense"), the fact was that team standings at the beginning of the final round bore no relationship to who actually won. uguu.org, for example, entered the final round in 52nd place, only to take third at the end. The opposite happened too: e.g., Hacking in the Rain fell from sixth place to finish at #41.
This is my attempt to make a more "fair" scoring system, one that rewards consistent performance by weighting every map equally. It works as follows:
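One natural way to weight every map equally (a sketch of the idea, not necessarily the exact formula used here) is to normalize each team's score on a map by the best score anyone achieved on that map, so that no map can contribute more than one point to a team's total. The map names and point values below are invented for illustration:

```python
def normalized_totals(raw_scores):
    """Compute per-map normalized totals.

    raw_scores maps map_name -> {team: raw points on that map}.
    Each team's raw score is divided by the best score achieved on
    that map, so every map is worth at most 1.0 regardless of how
    many lambdas it contains.
    """
    totals = {}
    for map_name, by_team in raw_scores.items():
        best = max(by_team.values())
        if best <= 0:
            continue  # nobody scored on this map; it contributes nothing
        for team, points in by_team.items():
            totals[team] = totals.get(team, 0.0) + points / best
    return totals

# Illustrative (made-up) data: a small map and a large map.
raw = {
    "small_map": {"A": 212, "B": 212, "C": 150},
    "large_map": {"A": 3500, "B": 1000, "C": 2900},
}
print(normalized_totals(raw))
```

Under raw scoring, team C's strong showing on the large map would dwarf everything else; under the normalized scheme, the small map counts just as much.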
Note that there are no sour grapes here: my own standing changes very little under this method of scoring, but other teams see wildly different results. Fortunately, both systems agree on first and second place in the main division (especially in light of Frictionless Bananas being head and shoulders above all other contestants), but in the lightning division, this system has GroML as the true winner, rather than HITORI.
The results are: