ICFP 2012 Results

(the way they ought to have been)

For the most part, the ICFP 2012 programming competition was a great contest. There was one major fly in the ointment, however -- and that had to do with how submissions were evaluated. Each map contributed to a team's score according to how many lambdas it contained; this meant that a great performance on a small map contributed very little to one's score, whereas nearly all the points in the contest could be found on the last two maps.

This is, to use a technical term, a completely bogus way to judge a competition.

Because no two maps carried the same weight, this led to some surprising results. The #1 team scored more points on the last two maps alone than the #2 team scored in the entire contest. Although the top ten finishers in every round were concealed ("to keep the suspense"), the fact was that team standings at the beginning of the final round bore no relationship to who actually won. uguu.org, for example, entered the final round in 52nd place, only to take third at the end. The opposite happened too: e.g., Hacking in the Rain fell from sixth place to finish at #41.

This is my attempt to make a more "fair" scoring system, one that rewards consistent performance on a wide variety of maps equally. It works as follows:

  1. For each map, a team is awarded points according to the fraction of teams with an equal or lesser score on the map. For example, a team with the best score on the map is awarded 1.0 points; the median team would be awarded 0.5 points, and the lowest scoring team would receive close to 0 points.
  2. This value is then multiplied by a weighting factor. The first ten maps in both divisions are each worth 8 points; the last two maps (main division) and last four maps (lightning division) are worth 10 points. In total, there is a maximum score of 100 in the main division and 120 in the lightning division.
  3. Teams that score zero (or below!) on a map receive zero points for that map.
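The three rules above amount to weighted fractional-rank scoring. Here is a minimal sketch in Python; the function names (`map_points`, `total_points`) and the sample scores are my own illustration, not real contest data:

```python
def map_points(scores, team, weight):
    """Award `weight` times the fraction of teams whose score on this
    map is equal to or less than `team`'s score (rules 1 and 2).

    Teams scoring zero or below on the map receive nothing (rule 3).
    `scores` maps team name -> raw score on one map.
    """
    own = scores[team]
    if own <= 0:
        return 0.0
    at_or_below = sum(1 for s in scores.values() if s <= own)
    return weight * at_or_below / len(scores)


def total_points(per_map_scores, weights):
    """Sum each team's weighted fractional rank over all maps.

    `per_map_scores` is a list of {team: score} dicts, one per map;
    `weights` gives the point value of each map (8 or 10 above).
    """
    totals = {}
    for scores, weight in zip(per_map_scores, weights):
        for team in scores:
            totals[team] = totals.get(team, 0.0) + map_points(scores, team, weight)
    return totals
```

Note that the best team on a map counts itself among the teams "at or below" its own score, so it earns the full weight; the lowest-scoring (but positive) team earns 1/n of it, which approaches zero as the field grows.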

Note that there are no sour grapes here. My own standing changes very little under this method of scoring, but other teams see some wildly different results. Fortunately, both systems agree on first and second place in the main division (especially in light of Frictionless Bananas being head-and-shoulders above all other contestants), but in the lightning division, this system has GroML as the true winner, rather than HITORI.

The results are: