The Netflix Prize leaderboard continues to be a fascinating proof of the value of experimentation when working with big data.
The top entries include teams of graduate students from around the world, with Eastern Europe particularly well represented. The second-best entry at the moment comes from a team of Princeton undergraduates (kudos, Dinosaur Planet).
Some of the teams disclose information about their solutions, enough to make it clear that the teams are playing with a wide variety of techniques.
I love the "King of the Hill" approach to these kinds of problems. There should be no sacred cows, no egos preventing people from trying and testing new techniques. From the seasoned researcher to the summer intern, anyone should be able to try their hand at the problem and build on what works.
Please also see my July 2007 post, "Netflix Prize enabling recommender research", and my June 2007 post, "Latest on the Netflix Prize".
See also my April 2006 post, "Early Amazon: Shopping cart recommendations", for an example from the early days of Amazon of the value of A/B testing and experimentation.
Update: The KDD Cup 2007 papers are available. They give a nice flavor of the approaches (mostly twiddles on SVD) currently near the lead.
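For readers unfamiliar with the SVD flavor of these approaches, here is a minimal sketch of the core idea on a toy ratings matrix: approximate the user-movie matrix with a low-rank factorization and read predictions off the reconstruction. Everything here (the data, the mean-imputation step, the rank) is illustrative only; the actual Prize entries factor the sparse matrix directly, typically with gradient descent, plus the many twiddles the papers describe.

```python
import numpy as np

# Toy ratings matrix (users x movies); 0 marks a missing rating.
# Purely illustrative data, not from the Netflix Prize set.
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])
mask = R > 0

# Crude stand-in for handling sparsity: impute missing entries
# with each user's mean rating before factoring.
user_means = (R.sum(axis=1) / mask.sum(axis=1))[:, None]
filled = np.where(mask, R, user_means)

# Keep only the top-k singular values/vectors: a rank-k
# approximation that smooths over the noise in the ratings.
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat[u, m] is now the predicted rating for user u on movie m,
# including the entries that were missing in R.
print(R_hat.round(2))
```

The competitive versions skip the imputation entirely, learning the factors only from observed ratings and layering on regularization, biases, and blending, which is where most of the leaderboard movement comes from.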