Google hired a pair of very bright industrial designers to figure out how to cram the greatest number of CPUs and the most storage, memory, and power support into a 20- or 40-foot [shipping container]. We're talking about 5,000 Opteron processors and 3.5 petabytes of disk storage that can be dropped off overnight by a tractor-trailer rig.

Now, it appears that Microsoft may be getting into the act. Windows Live Core architect James Hamilton wrote a paper, "Architecture for Modular Data Centers" (.doc), that reflects considerable thought about how you might squeeze a data center into a shipping container.
Extended excerpts from the paper:
[We propose] to no longer build and ship single systems or even racks of systems. Instead, we ship macro-modules consisting of a thousand or more systems.

There also is a PowerPoint presentation (.ppt) that covers much of the same material as the paper.
Each module is built in a 20-foot standard shipping container, configured, and burned in, and is delivered as a fully operational module with full power and networking in a ready-to-run, no-service-required package. All that needs to be done upon delivery is provide power, networking, and chilled water.
Components are never serviced and the entire module just slowly degrades over time as more and more systems suffer non-recoverable hardware errors ... Software applications implement enough redundancy so that individual node failures don't negatively impact overall service availability ... At the end of its service life, the container is returned to the supplier for recycling.
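As a rough illustration of why "never serviced" can work (my sketch, not from the paper, with made-up failure numbers): if application data is replicated across several independent nodes, an item becomes unavailable only when every replica has died, so per-item unavailability shrinks geometrically with the replication factor even as dead nodes accumulate in the container.

```python
# Back-of-envelope sketch (illustrative numbers, not from Hamilton's paper):
# if a fraction p of nodes have suffered non-recoverable failures at some
# point in the module's life, and each data item is replicated on r
# independently chosen nodes, the item is lost only if all r replicas died.
def item_unavailability(p: float, r: int) -> float:
    """Probability that all r replicas of an item are on dead nodes."""
    return p ** r

# Even with 20% of the container's nodes dead, three-way replication keeps
# per-item unavailability below one percent.
print(item_unavailability(0.20, 3))
```

This is the basic reason the module can "slowly degrade" without service calls: the software layer, not a repair technician, absorbs individual node failures.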
This model brings several advantages: 1) on delivery systems don't need to be unpacked and racked, 2) during operation systems aren't serviced, and 3) at end-of-service-life, the entire unit is shipped back to the manufacturer for rebuild and recycling without requiring unracking & repackaging to ship.
A shipping container is a weatherproof housing for both computation and storage. A "data center," therefore, no longer needs to have the large rack rooms with raised floors that have been their defining feature for years ... The only requirement is a secured, fenced, paved area to place the containers around the central facilities building.
On-site hardware service can be expensive ... [we avoid] these costs ... Even more important ... are the errors avoided by not having service personnel in the data center ... human administrative error causes 20% to 50% of system outages.
The macro-module containers employ direct liquid cooling ... No space is required for human service or for high volume airflow. As a result, the system density can be much higher than is possible with conventional air-cooled racks .... High efficiency rack-level AC to DC rectifier/transformers [also may yield] significant power savings.
This architecture transforms data centers from static and costly behemoths into inexpensive and portable lightweights.
[James Hamilton paper and talk found via Mary Jo Foley]
Update: Nick Carr writes about Rackable Systems' Concentro and Sun's Blackbox in his post, "Showdown in the trailer park".
Update: One year later, Microsoft announces that their new Chicago data center's "entire first floor is full of containers; each container houses 1,000 to 2,000 systems; 150-220 containers on the first floor." That's a total of roughly 200k-400k servers. Wow, quite a build-out. [Found via James Hamilton and Nick Carr]
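Multiplying the quoted figures out (my arithmetic, not Microsoft's) shows where that server total comes from:

```python
# Back-of-envelope check of the Chicago build-out, using the quoted ranges:
# 150-220 containers, each holding 1,000-2,000 systems.
containers_low, containers_high = 150, 220
systems_low, systems_high = 1_000, 2_000

low = containers_low * systems_low     # smallest plausible count
high = containers_high * systems_high  # largest plausible count
print(low, high)
```

The raw bounds are 150,000 to 440,000 servers, so the 200k-400k figure is a sensible mid-range estimate.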