I enjoyed
Tim O'Reilly's post with thoughts from Tim and others on the changes we may see from increased hardware parallelization.
Like them, I was struck by the news of an
80-processor chip prototype from Intel and wondered what changes it might cause.
However, as I think about this, I suspect the changes we will see will go well beyond increased use of threaded programming to parallelize a single task or of data mining frameworks like MapReduce.
So, if you do not mind, please indulge me while I go all wacky visionary with this post.
With hundreds of processors available on the desktop, I think we will be moving toward a model of wildly speculative execution. Processors will be used to do work that may be necessary soon rather than work that is known to be necessary now.
Modern single-core, pipelined processors already do this to a very limited extent.
Speculative execution sends a processor down the most likely path of a conditional branch, executing a few cycles of machine instructions that may have to be thrown away if the branch prediction was incorrect.
What I think we may see is a radically expanded version of speculative execution, running code seconds or minutes ahead of when it may be needed. Most of this work will be thrown away, but some will be available later just when it is needed.
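To make that concrete, here is a minimal sketch of task-level speculation in Python. It is purely illustrative rather than anyone's real API: the two workloads and the slow_condition helper are invented, and threads stand in for a sea of spare processors. Both branches start computing before we know which one we need, and the loser is simply discarded.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    def slow_condition():
        # Stand-in for a decision that arrives late: I/O, user input,
        # or the output of some other model.
        time.sleep(0.1)
        return random.choice(["path_a", "path_b"])

    def speculate(executor, candidates):
        # Start computing every candidate result ahead of time.
        return {name: executor.submit(fn) for name, fn in candidates.items()}

    with ThreadPoolExecutor(max_workers=8) as executor:
        # Work that *may* be needed soon, started before we know which.
        futures = speculate(executor, {
            "path_a": lambda: sum(i * i for i in range(100_000)),
            "path_b": lambda: sum(i * 3 for i in range(100_000)),
        })
        chosen = slow_condition()
        result = futures[chosen].result()  # usually already finished
        # The speculative work we did not need is thrown away.
        for name, future in futures.items():
            if name != chosen:
                future.cancel()

Cheap, abundant processors are exactly what would make all that wasted work affordable.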
It is easier to imagine how this might work for some tasks than others. For example, I could imagine a
speech recognition engine running on your desktop that simultaneously runs hundreds of models, all analyzing and voting on what you have said and what you are about to say.
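As a toy of what that voting might look like, assume something like the following; the models here are fake stand-ins, and a real recognizer would be enormously more complex.

    import random
    from collections import Counter
    from concurrent.futures import ThreadPoolExecutor

    def make_model(seed):
        # Hypothetical recognizer: each instance guesses a transcript,
        # with some per-model noise.
        rng = random.Random(seed)
        def model(audio):
            if rng.random() < 0.7:
                return "recognize speech"
            return "wreck a nice beach"
        return model

    models = [make_model(seed) for seed in range(200)]

    with ThreadPoolExecutor(max_workers=16) as executor:
        votes = list(executor.map(lambda m: m("<audio frame>"), models))

    consensus, count = Counter(votes).most_common(1)[0]
    print(f"{count}/{len(models)} models vote for: {consensus!r}")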
I could also imagine a computer immune system that uses a fraction of the processors to search for anomalous patterns in the usage of the rest of the hardware, growing in size as potential threats are detected and shrinking away as the threat passes.
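The interesting part of that idea is the grow-and-shrink policy, which is easy to sketch. Everything below is invented for illustration: the metric stream, the crude z-score test, and the thresholds.

    import random
    import statistics

    def read_metric():
        # Stand-in sensor: mostly normal readings, occasional spikes.
        return random.gauss(100, 5) + (200 if random.random() < 0.02 else 0)

    monitors = 2   # processors currently devoted to watching
    window = []

    for tick in range(500):
        window = (window + [read_metric()])[-50:]   # sliding window
        mean = statistics.mean(window)
        stdev = statistics.pstdev(window) or 1.0
        anomaly = abs(window[-1] - mean) / stdev    # crude z-score

        if anomaly > 3:                  # possible threat: grow the watch
            monitors = min(monitors * 2, 64)
        elif monitors > 2:               # threat passing: shrink back
            monitors -= 1

    print(f"monitors devoted to watching at end of run: {monitors}")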
I think our programming model for many tasks may move from controlled, orderly execution of code to letting loose many competing predictive models that execute in parallel.
In that sense, I think Larry Page was right when he
said, "My prediction is that when AI happens, it's going to be a lot of computation, and not so much clever blackboard/whiteboard kind of stuff, clever algorithms. But just a lot of computation." Larry then went on to compare this vision of computer AI to how the brain works.
The brain is a giant pattern matching, prediction engine. On its 100,000,000,000 processors, it speculatively matches patterns, creates expectations for the future, competes potential outcomes against each other, and finds consensus. These predictions are matched against reality, then adapted, improved, and modified.
With a few hundred processors, we are a long way from the parallel processing abilities of the human brain. Yet, as we look for uses for the processing power we soon will have available, I suspect the programs we see will start to look more like the messy execution of the prediction engine in our head than the comfortable, controlled, sequential execution of the past.
Update: Nine months later, Andrew Chien (Director of Intel Research) says something similar in
an interview with MIT Technology Review:
Terascale computing ... [is] going to power unbelievable applications ... in terms of inference. The ability for devices to understand the world around them and what their human owners care about is very exciting.
In order to figure out what you're doing, the computing system needs to be reading data from sensor feeds, doing analysis, and computing all the time. This takes multiple processors running complex algorithms simultaneously.
The machine-learning algorithms being used for inference are based on rich statistical analysis of how different sensor readings are correlated, and they tease out obscure connections.
[Chien interview found via
Nick Carr]