As reported on Wired.
BY ROBERT MCMILLAN
Is this the future of the data center? Photo by Ariel Zambelich/Wired
A few years ago, Christos Kozyrakis was looking for something new. He’d been teaching computer science at Stanford for nearly a decade, and he thought that spending some time as a visiting professor at Microsoft might be fun.
In 2010, he spent a few weeks at Microsoft’s campus in Redmond, Washington, and he continued to collaborate with Microsoft’s researchers for several months after he left. What he learned during this working sabbatical could help big-name web companies save some big money inside the data centers that drive their online services — and change the way we think about the computer server.
Over the past decade, the very concept of the server has evolved. Once, servers were giant machines jam-packed with processors and memory that focused on processing speed above all else. But nowadays, most servers are smaller and cheaper, and they consume less power. Services like Google Search and Microsoft Bing run on thousands of commodity machines, not the big beefy database servers hawked by companies like Oracle. When you’re serving millions of people across the globe, you can’t afford those power-hungry machines.
This year, in an effort to support the Googles and the Microsofts, startups like Calxeda and Marvell are experimenting with a new breed of super-low power processor based on the ARM chip designs that you can already find in your mobile phone. But Kozyrakis says there’s another big way to reduce power in the data center. He thinks that Google and Microsoft can also benefit from the low-power memory chips you’ll find in an iPhone.
That’s because the type of jobs handled in a big internet data center are very different from the workloads that server memory was designed for. With traditional software applications, a chip swaps data with memory so quickly that it can use up more than 100 GB per second of bandwidth. But things are different over in the web data center. Companies like Facebook and Microsoft like to fill server memory with as much data as possible so they can return search results or timeline updates as quickly as possible. And that means that the chips don’t access memory nearly as often.
In Microsoft’s labs, Kozyrakis and his colleagues studied and stress-tested the Bing search engine and another piece of data-sifting software, similar to Hadoop, called Cosmos. They found that these programs used a lot of processing power, but they only used between 6 and 9 percent of the server’s memory bandwidth. That’s a big difference from the world of business software, including old school databases. “It’s not that these applications don’t need bandwidth,” Kozyrakis says. “They do. They just don’t need it as much.”
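To see what 6 to 9 percent means in practice, here is a back-of-envelope sketch. The server configuration and the per-channel peak figure below are illustrative assumptions, not numbers from the study:

```python
# Assumed configuration: a dual-socket server with four DDR3-1600
# channels per socket, each with a theoretical peak of 12.8 GB/s.
channels = 8
peak_per_channel_gbps = 12.8
peak_bw_gbps = channels * peak_per_channel_gbps  # 102.4 GB/s theoretical peak

# The study's measured range for Bing and Cosmos: 6-9% of peak.
for utilization in (0.06, 0.09):
    used = peak_bw_gbps * utilization
    print(f"{utilization:.0%} utilization -> ~{used:.1f} GB/s actually used")
```

Under those assumptions, the software leaves well over 90 GB/s of paid-for memory bandwidth idle — which is the gap Kozyrakis wants to close with slower, cheaper memory.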
So Kozyrakis thinks that data centers should explore low-cost mobile phone memory, much in the way they’re experimenting with processors that are based on mobile phone designs.
Christos Kozyrakis. Photo: Stanford
The DDR3 memory that ships with Xeon servers today uses about five times the power of the lower-bandwidth LPDDR2 memory you can get in mobile phones. For some jobs, low-bandwidth, low-power LPDDR2 might just do the trick, Kozyrakis says.
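A rough sketch of what that five-to-one ratio could mean per server. The absolute wattage and the DIMM count below are illustrative assumptions; only the 5x ratio comes from the article:

```python
# Assumed active power per DDR3 DIMM (watts) -- illustrative, not measured.
ddr3_w_per_dimm = 5.0
# Article's claim: LPDDR2 draws roughly one-fifth the power.
lpddr2_w_per_dimm = ddr3_w_per_dimm / 5

dimms = 16  # assumed DIMM count for a memory-heavy web server
savings_w = dimms * (ddr3_w_per_dimm - lpddr2_w_per_dimm)
print(f"Estimated memory power saved per server: ~{savings_w:.0f} W")
```

Multiply a figure like that across tens of thousands of servers in a data center, and the appeal of mobile memory becomes clearer.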
The man in charge of Microsoft’s server engineering, Kushagra Vaid, calls the idea innovative, but he says it would take a lot of work to adapt server processors to work with mobile chip memory. “They’re thinking outside of the box,” he says. “They thought of a very creative way to create mobile memory from the mobile ecosystem and find a way to make it more server friendly. But that said… the hardware ecosystem — especially the memory manufacturers — have to buy into this concept as well.”
If the server industry’s experience with low-power processors is any indication, that will only happen when people like Vaid start to twist the arms of chipmakers, demanding more power-efficient products. That’s what Jonathan Heiliger — then Facebook’s vice president of technical operations — did at a San Francisco conference in June 2009. It took chipmakers a few more years to get the message.
And if Facebook thinks that memory power consumption is about to become a hot-button issue, they’re not saying so. The company declined to comment for this story. As did Intel. DRAM makers Samsung and Hynix couldn’t provide comment either.
But AMD — the scrappy Intel competitor that recently raised eyebrows by licensing an ARM design for its server chips — did want to talk about this. Its position: upcoming low-power server memory technologies, such as the Hybrid Memory Cube and the High Bandwidth Memory standard, will improve power performance in server chips. Phone memory, the company says, probably won’t be needed.
If AMD ever changes its mind, it will have some work to do. A decade ago, servers used special chips — called memory controllers — to manage the flow of data in and out of the computer’s memory. Today, those memory controllers are built right into the server chips themselves. So the kind of microprocessor that Kozyrakis envisions would either need a new low-power memory controller built right into it, or it would need to move the memory controller off-chip.
These mobile phone chips would also need some work so they could be configured with the error-correcting code that servers require, but companies like HP are already exploring ways this could be done.
So that leaves Kozyrakis’s dream of cell-phone powered server memory in a kind of limbo — technically feasible, but waiting for a champion.
But that could change in the next few years. Because chip performance is improving faster than memory density, data centers are using a bigger slice of their energy to power memory chips than they did a decade ago — a trend that looks like it will make memory power a bigger issue in the future.
Microsoft’s Vaid agrees that it’s a growing problem, especially with applications like search or in-memory caching software like memcached. “In those applications, what we see is anywhere between 10 to 20 percent of our server power goes into memory,” he says. “That’s a big number.”