Amazon is well known for its fun feature "Customers who bought this also bought". It is a great way to discover related books.
Internally, that feature is called similarities. Using the feature repeatedly, hopping from detail page to detail page, is called similarity surfing.
A very sharp and experienced developer named Eric wrote the first version of similarities that made it out to the Amazon website. It was great working with Eric. I learned much from him over the years.
The first version of similarities was quite popular. But it had a problem, the Harry Potter problem.
Oh, yes, Harry Potter. Harry Potter is a runaway bestseller. Kids buy it. Adults buy it. Everyone buys it.
So, take a book, any book. If you look at all the customers who bought that book, then look at what other books they bought, rest assured, most of them have bought Harry Potter.
This kind of similarity is not very useful. If I'm looking at the book "The Psychology of Computer Programming", telling me that customers are also interested in Harry Potter is not helpful. Recommending "Peopleware" and "The Mythical Man Month", that is pretty helpful.
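To see why this happens, picture the naive approach: just count, for every pair of books, how many customers bought both. A toy sketch in Python (illustrative only, made-up data, not the actual algorithm):

from collections import defaultdict
from itertools import permutations

# Each customer's set of purchased books (made-up data).
orders = [
    {"Harry Potter", "The Psychology of Computer Programming"},
    {"Harry Potter", "The Psychology of Computer Programming", "Peopleware"},
    {"Harry Potter", "The Psychology of Computer Programming"},
    {"Harry Potter", "Gardening Basics"},
    {"Harry Potter", "The Mythical Man Month"},
]

# For each book, count how often every other book appears in the same order.
cooccur = defaultdict(lambda: defaultdict(int))
for basket in orders:
    for a, b in permutations(basket, 2):
        cooccur[a][b] += 1

book = "The Psychology of Computer Programming"
ranked = sorted(cooccur[book].items(), key=lambda kv: -kv[1])
print(ranked)
# Harry Potter tops the list (3 co-purchases vs. 1 for Peopleware),
# even though Peopleware is the genuinely related book.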
Solving this problem is not as easy as it might appear. Some of the more obvious solutions create other problems, some of which are more serious than the original.
After much experimentation, I discovered a new version of similarities that worked quite well. The similarities were non-obvious, helpful, and useful. Heck, while I was at it, I threw in some performance improvements as well. Very fun stuff.
When this new version of similarities hit the website, Jeff Bezos walked into my office and literally bowed before me. On his knees, he chanted, "I am not worthy, I am not worthy."
I didn't know what to say then, and I don't know what to say now. But that memory will stick with me forever.
6 comments:
Wow. That would be the highlight of my entire professional career, a billionaire doing that. Wow.
This is exactly why I love the Amazon series of posts.
You know, I've noticed that non-technical people don't understand that sometimes what appears to be simple is actually very difficult, and what appears to be a BIG change is actually easy.
Congrats on getting recognition from the top, that's rare (for most of us in the corporate world).
So.. any hints on the solution? It seems the first thing I'd try would be to weight my co-occurrences by content. I.e. rather than doing just a simple purchase "link" analysis, I would weight the links higher or lower, based on the textual or content (objective) similarity of the two media objects in question. Perhaps I'd also throw in metadata, such as category (home:garden, electronics, etc) where available and appropriate.
Is this kinda along the lines of what you did?
(BTW, I will eventually go non-anonymous, as you asked, once I finally set up a blog. I'm too lazy at the moment. Workin' on it..)
I, too, am interested in hearing about the solution, although I realize Greg has absolutely no incentive to share it (and probably some disincentives, in the form of NDAs?).
My guess is that one criterion being considered is that the higher the sales rank of a book, the less it counts in the recommendation engine. If the purchase rate is 100%, the recommendation weight is 0; conversely, if the purchase rate is .000000001%, then the recommendation weight is 9.9 (on a 10 pt scale, of course).
So, if you bought Harry Potter (imaginary weight=0.1) and One-Legged Poets of the Seventh Century BC (9.8), your recommendations would be far more heavily weighted towards weird poet books. Of course, the disparity would have to be smaller than that, to ensure that if you bought ten wizard books and one poet book, you got more wizard recommendations than poets.

So I guess you'd run with a smaller range, say a default of 6 and a max of 10, so a Harry Potter book would be 6.1 and the less popular book would be 9.8, with extra books in a series counting a lot less than a full book. So, if you bought two equally popular non-series wizard books from different series, they'd count 12.2 against wizard recommendations, while two Harry Potter books would count considerably less, perhaps 8.1 (one full book and one third book), and one poet book would count 9.8.

Weighing all the various numbers against other users' histories would give results you could use, as opposed to a simple similarity system.
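Roughly, in code (the log curve and the exact numbers here are just my guesses, not anything Amazon actually does):

import math

def book_weight(purchase_rate, min_w=6.0, max_w=10.0):
    # purchase_rate: fraction of all customers who bought the book (0..1].
    # Ubiquitous books sink toward min_w; rare books approach max_w.
    rarity = -math.log10(purchase_rate) / 9.0  # 0 at rate=1, 1 at rate=1e-9
    rarity = min(max(rarity, 0.0), 1.0)
    return min_w + (max_w - min_w) * rarity

print(book_weight(0.20))   # a Harry Potter-scale bestseller: ~6.3
print(book_weight(1e-7))   # an obscure poetry book: ~9.1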
Just rambling some ideas...
nathan, that sounds plausible. I like your idea better than mine, actually. Much simpler. It kinda sounds like an IDF (inverse document frequency) weight. Things with near ubiquity contribute very little to "relevance".
Reminds me of the Greiff 1998 SIGIR paper.. it is actually terms in the middle of the IDF curve that give you the best performance. Very low idf (words like "the" and "an") are near useless. Very high idf (words like "snighzimpup") are also near useless. It is words in the mid-range that are the most valuable.
I can see the same thing for books. If a book is purchased near ubiquitously (Harry Potter) it is not very valuable. If a book is purchased only once, and you happen to also buy something that one other person also bought, that can be equally useless...because you've got high potential variance with only a single datapoint.
But for that stuff in the middle.. a medium amount of purchases, with a medium amount of "people who also bought" links.. I can see why that'd be golden.
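Just to make the analogy concrete, an IDF-damped co-occurrence count might look something like this (purely illustrative, obviously not the real similarities algorithm):

import math
from collections import defaultdict

def idf_weighted_similarities(orders, min_support=2):
    n = len(orders)
    df = defaultdict(int)  # number of orders containing each book
    for basket in orders:
        for book in basket:
            df[book] += 1

    scores = defaultdict(lambda: defaultdict(float))
    for basket in orders:
        for a in basket:
            for b in basket:
                if a == b or df[b] < min_support:
                    continue
                # Ubiquitous books get a weight near log(n/n) = 0, and books
                # seen only once are dropped via min_support, so both
                # extremes of the curve contribute almost nothing.
                scores[a][b] += math.log(n / df[b])
    return scores

orders = [{"Harry Potter", "Peopleware", "The Mythical Man Month"},
          {"Harry Potter", "Peopleware"},
          {"Harry Potter", "Gardening Basics"}]
sims = idf_weighted_similarities(orders)
print(sims["Peopleware"])
# Harry Potter is in every order, so it scores log(3/3) = 0 and no
# longer drowns out everything else.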