Googler Corinna Cortes just posted a bunch of links on the Google Research Blog, including a link to talks that various researchers have given at Google.
Following that, I discovered a talk that Brown University Professor Tom Dean gave at Google in January 2006 called "Scalable Learning and Inference in Hierarchical Models of the Neocortex".
Eyes glazing over? Okay, right, that might be an intimidating title.
But, really, if you have any interest in this kind of thing, check out the talk. It is a fascinating discussion of how parts of the brain work and of pattern recognition techniques inspired by neurological processes. Tom does a great job with the talk, so it should be reasonably accessible even if you have no background in this stuff.
Even if this kind of thing is old hat for you, you should still check out the talk starting at around 44:00. That's when Tom Dean starts talking about how to parallelize a hierarchical Bayesian network across a cluster of computers.
Of particular interest to me was that he presented MapReduce code for the computation and seemed to be arguing that he could run very large-scale Bayesian networks in parallel on the Google cluster. I was surprised by this -- I would have thought that communication overhead would be a serious issue -- but Tom claimed that the computation supports coarse-grained parallelism because of the hierarchical structure of the models.
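To give a flavor of why the hierarchy helps, here is a toy, single-machine sketch of the pattern as I understand it. To be clear, this is my own illustration, not Tom Dean's actual code; the network, the conditional probability tables, and all of the names in it are invented. The point is that each map task can compute the upward belief-propagation message for one subtree entirely on its own, so the only communication is the shuffle that brings sibling messages together at their shared parent.

```python
# A toy, single-machine imitation of the MapReduce pattern: one upward
# (leaf-to-root) pass of message passing in a tree-structured Bayesian
# network. The structure, CPTs, and names are invented for illustration,
# not taken from the talk or papers.

from collections import defaultdict

# Hypothetical two-state network: each leaf's CPT gives P(leaf | parent),
# stored as {parent_state: [P(leaf=0), P(leaf=1)]}.
CPT = {
    "left":  {0: [0.9, 0.1], 1: [0.2, 0.8]},
    "right": {0: [0.7, 0.3], 1: [0.4, 0.6]},
}
PARENT = {"left": "root", "right": "root"}   # tree structure
EVIDENCE = {"left": 1, "right": 0}           # observed leaf states

def map_phase(leaf, observed_state):
    # Each mapper handles one leaf (in general, one whole subtree)
    # independently -- this is the coarse-grained parallelism, since
    # mappers never talk to each other. It emits the message
    # lambda(parent_state) = P(evidence | parent_state).
    message = [CPT[leaf][s][observed_state] for s in (0, 1)]
    return PARENT[leaf], message

def reduce_phase(parent, messages):
    # The reducer multiplies incoming messages elementwise, combining
    # the evidence from all children of this parent node.
    combined = [1.0, 1.0]
    for m in messages:
        combined = [a * b for a, b in zip(combined, m)]
    return parent, combined

# Shuffle: group mapper output by key (the parent node).
grouped = defaultdict(list)
for leaf, state in EVIDENCE.items():
    key, msg = map_phase(leaf, state)
    grouped[key].append(msg)

for parent, msgs in grouped.items():
    print(reduce_phase(parent, msgs))   # ('root', [0.07, 0.32])
```

In a real implementation, each mapper would presumably own a much larger subtree, which is where the coarse-grained parallelism would come from: the bigger the subtree, the more computation per unit of communication.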
If this really is true, it would be a fascinating application of the massive computational power of the Google cluster. Maybe it is my inner geek talking, but I'm drooling just thinking about it.
If you want to dig in more, Tom Dean also has three recent papers on this work. I haven't gotten to his new papers yet, but I will soon.
Update: I read the three papers. I'm quite a bit less excited now.
It seems this work is further away from demonstrating interesting results at massive scale than I first thought. The experimental results focus on toy problems, like the handwriting recognition task described in the talk, and show only modest success even on that problem. Communication overhead in large networks appears to be significant -- as I suspected at first -- and it is not clear to me that this could run effectively at scale on the Google cluster.
It appears I may have been too hasty in getting so excited about this work.
Saturday, March 04, 2006
2 comments:
Accompanying the video, I found those slides (PDF) rather helpful.
A higher resolution copy of the slides is very helpful. Thanks, Chris!