There are some very useful lessons in a recent WSDM 2011 paper, "Personalizing Web Search using Long Term Browsing History" (PDF).
First, they focused on a simple and low risk approach to personalization, reordering results below the first few. There are a lot of what are essentially ties in the ranking of results after the first 1-2 results; the ranker cannot tell the difference between those results and is ordering them arbitrarily. Targeting the results the ranker cannot differentiate is not only low risk, but also more likely to yield easy improvements.
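To make the idea concrete, here is a minimal sketch of that kind of conservative reranking. This is not the paper's actual algorithm; the function name, the `base_score` field, the `keep_top` and `tie_epsilon` parameters, and the `personal_score` callback are all hypothetical. The top results stay fixed, and only runs of results whose base scores are within a small epsilon of each other (the "ties") are reordered by a personalization score.

```python
def rerank_with_personalization(results, personal_score, keep_top=2, tie_epsilon=0.01):
    """Reorder only results the base ranker treats as near-ties.

    results: list of dicts with a "base_score" key, already sorted by base score.
    personal_score: callable giving a personalization score for a result.
    keep_top: leave the first few results untouched (low risk).
    tie_epsilon: base scores closer than this are considered tied.
    """
    head = results[:keep_top]
    tail = results[keep_top:]

    reordered = []
    i = 0
    while i < len(tail):
        # Extend the run while base scores stay within epsilon of the run's anchor.
        j = i + 1
        while j < len(tail) and abs(tail[j]["base_score"] - tail[i]["base_score"]) < tie_epsilon:
            j += 1
        # Within a tied run, let the personalization score break the tie.
        run = sorted(tail[i:j], key=personal_score, reverse=True)
        reordered.extend(run)
        i = j
    return head + reordered
```

The point of the structure is the risk bound: a result can only move within its tie group, so the reranker can never demote a result the base ranker was confident about.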
Second, they did a large scale online evaluation of their personalization approach using click data as a judgment of quality. That is pretty rare but important, especially for personalized search, where some random offline human judge is unlikely to know the original searcher's intent.
Third, their goal was not to be perfect, but just to help more often than hurt. And, in fact, that is what they did, with the best performing algorithm "improving 2.7 times more queries than it harms".
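One way a click-based improved-versus-harmed tally like that might be computed is sketched below. This is an assumption about the general shape of such an evaluation, not the paper's actual metric: for each query, compare the rank of the clicked result before and after reranking, call the query improved if the click moved up and harmed if it moved down.

```python
def improved_vs_harmed(queries):
    """Tally queries where the clicked result moved up vs. down after reranking.

    queries: list of dicts with "old_rank" and "new_rank" for the clicked
    result (rank 1 is the top position). Queries where the rank is
    unchanged count as neither improved nor harmed.
    """
    improved = sum(1 for q in queries if q["new_rank"] < q["old_rank"])
    harmed = sum(1 for q in queries if q["new_rank"] > q["old_rank"])
    return improved, harmed
```

With a tally like this, the "2.7 times more queries improved than harmed" claim is just `improved / harmed` over the evaluation traffic.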
I think those are good lessons for others working on personalized search, or even personalization in general. You can take baby steps toward personalization. You can start with minor reordering of pages. You can make low risk changes lower down the page, or only when the results are otherwise tied for quality. And, as you get more aggressive, you can verify at each step that the change does more good than harm.
One thing I don't like about the paper is that they only investigated using long-term history. There is a lot of evidence that very recent history, your last couple searches and clicks, can be important, since it may show frustration in an attempt to satisfy some task. But otherwise, great lessons in this work out of Microsoft Research.