Googler Bryan Horling was recently on a panel with Danny Sullivan at SMX and talked about personalized search. A few people ([1] [2] [3]) posted notes on the session.
Not too much there, but one interesting tidbit is the way Google is thinking about personalization as coming from three data sources: localization data (IP address or information in the history that indicates location), short-term history (specific information from immediately preceding searches), and long-term history (broad category interests and preferences summarized from months of history).
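To make those three buckets concrete, here is a minimal sketch (my own guess in Python; the names and fields are hypothetical, not anything Google has published) of how a personalization context drawing on localization, short-term history, and long-term history might be represented alongside a query:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PersonalizationContext:
    """Hypothetical container for the three signal buckets described in the talk."""
    # Localization: inferred from IP address or location hints in the history.
    location: Optional[str] = None
    # Short-term history: the immediately preceding searches in this session.
    recent_queries: list = field(default_factory=list)
    # Long-term history: broad category interests summarized from months of
    # history, e.g. {"sports": 0.7, "furniture": 0.1}.
    category_interests: dict = field(default_factory=dict)

# Example: a [jordans] query issued right after a search for [ethan allen].
ctx = PersonalizationContext(
    location="Boston, MA",
    recent_queries=["ethan allen"],
    category_interests={"furniture": 0.6, "sports": 0.2},
)
```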
A couple of examples were offered as well, such as a search for [jordans] showing the furniture store rather than Michael Jordan if the immediately preceding search was for [ethan allen], a search for [galaxy] showing LA Galaxy sports sites higher in the rankings if the searcher has a long-term history of looking at sports, and favoring web sites the searcher has seen in the past. Curiously, none of these examples worked as described when I tried them just now, but it is still interesting to think about.
What I like best about what Bryan described is that the personalization is subtle, only doing minor reorderings. It uses the tidbits of additional information about your intent in your history to make it just a little bit quicker to find what you probably are seeking. It's a nice, low-risk approach to experimenting with personalization, making only small changes that are likely to be helpful.
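As a rough illustration of how subtle such re-ranking could be, here is a small sketch, again my own speculation rather than Google's actual method, in which history-derived signals add only a small boost to the baseline score, so results swap positions only when they were nearly tied to begin with:

```python
# Hypothetical mapping from queries to broad topic tags; a real system would
# infer something like this from query logs or a classifier.
QUERY_TOPICS = {"ethan allen": {"furniture"}, "nba playoffs": {"sports"}}

def personalize(results, recent_queries, category_interests, max_boost=0.05):
    """Re-rank results, adding at most a small boost per history signal.

    results: list of (url, baseline_score, topic_tags) tuples.
    recent_queries: the immediately preceding searches (short-term history).
    category_interests: long-term interest weights, e.g. {"sports": 0.7}.
    All names and numbers here are guesses used only to illustrate the idea
    of "minor reordering": near-ties can swap, large gaps cannot.
    """
    recent_topics = set()
    for q in recent_queries:
        recent_topics |= QUERY_TOPICS.get(q, set())

    reranked = []
    for url, score, topics in results:
        boost = 0.0
        if topics & recent_topics:                  # short-term history signal
            boost += max_boost
        boost += max_boost * max(                   # long-term interest signal
            (category_interests.get(t, 0.0) for t in topics), default=0.0
        )
        reranked.append((url, score * (1.0 + boost)))
    return sorted(reranked, key=lambda r: r[1], reverse=True)

# A [jordans] search: with [ethan allen] in the recent history, the furniture
# site edges past the nearly tied basketball result; without it, it does not.
results = [
    ("nba.example/jordan", 1.00, {"sports"}),
    ("jordansfurniture.example", 0.98, {"furniture"}),
]
print(personalize(results, ["ethan allen"], {"furniture": 0.6, "sports": 0.2}))
```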
Wednesday, October 15, 2008
6 comments:
When I first started commenting on your blog, a few years ago, one of the things I specifically remember grappling with was what problem personalization was really trying to solve.
What I wanted was something beyond simple word sense disambiguation (jordan the furniture store, the basketball player, the country, the Berkeley professor, the friend of mine from college).
It's still not clear to me that personalization is anything beyond this. I feel that I'm still grappling with the overarching story, the "killer app" of personalization. Is it really just word sense disambiguation? I know you said that it's a good way to "start small", but I still don't see how one would grow larger from here.
And didn't Google buy that personalization company in 2003? It's taken them 5 years just to do this word-sense disambiguatory personalization? And even then, the examples didn't work for you? What's up with that?
I'm with Jeremy. Personalized search as practiced seems to be after the wrong problem. It's what leads to sites like RushmoreDrive, which think they can improve relevance based on prior assumptions about your race.
I commend Google for being less presumptuous. But I still think interactive interfaces trump incremental cleverness. Wasn't it a Googler who recently said that more data trumped better algorithms? Why not, to steal a line from Feynman, "just ask" for that data from the user? Asking surely beats guessing.
I'm not sure there's anything realistically more likely to get me to stop using Google than search personalization. (I use "realistically" as a disclaimer to rule out e.g. randomly degrading search quality or reliability, etc.)
I've long since disabled Firefox's "awesome bar", and deeply mourn the loss of simple textfile cookie storage that it murdered in its gestation; 'twas a boon to wget website mirroring, but alas! no more.
I had to install CustomizeGoogle to kill the laggy click tracking on Google web searches, and I have Google personalization turned off.
However, tuning searches according to IP is already a problem: whenever I'm in a different country, Google defaults to a different search page language! How ridiculous is that? It's so bad that when I use my mobile phone to connect, I get sent to google.ie, even though I'm in the UK, because my mobile phone SIM is Irish and the proxy is in Dublin: so it's doubly incorrect. Geolocation by IP is evil and broken. Don't even go there.
Wasn't it a Googler who recently said that more data trumped better algorithms? Why not, to steal a line from Feynman, "just ask" for that data from the user?
Personally, I think you're absolutely right about this.
But Greg and I went over this endlessly a few years ago. And there are two opposing principles at work here. The first is as you say: what the user tells you gives you orders of magnitude more data than constantly trying to guess. The second, however, is that the user is lazy and will not tell you.
Greg believes laziness is more powerful. I believe the opposite, and that we simply haven't yet designed the right interfaces to get the user to understand just how truly valuable their interaction data is. But no matter which of us is right about that, the tension does exist.
Or do you have knowledge, from your users and from the domains in which you work, about just how lazy (or non-lazy?) users really are? In your enterprise search work, or your product search work... how do users typically behave?
So why not just ask?
Do you want to be lazy and let us guess?
Or do you want to interact a bit?
So why not just ask?
Do you want to be lazy and let us guess? Or do you want to interact a bit?
The reason why search engines do not just ask, at least as far as I understand it, is twofold: (1) users don't respond if you ask (they're too lazy), and (2) since users don't respond, the "asking" clutters up the interface. And some of the major search engines are wed to the ultra-clean interface.
It's not a matter of us being lazy. It's a matter of users being lazy.
I still think, though, that users are not really lazy. It's that we haven't asked in quite the right way, yet. HCIR is an attempt to figure out better ways of asking.
Greg, Daniel, will either of you be at CIKM at the end of the month? Would be fun to chat.