As reported on Wired.
BY CADE METZ
Facebook’s Frank Frankovsky. Photo: Wired/Bryan Frank
Facebook recently ran an experiment. Inside a test lab, somewhere behind the scenes at the world’s most popular social network, engineers sidled up to a computer server loaded with software that typically drives the Facebook website and started messing with the CPU.
Every processor includes something called a cache — a place to temporarily store data without sending it all the way back to a machine’s main memory — and with their test machine, these Facebook engineers started shutting down portions of the cache, just to see how their software would respond.
“The cache embedded on a CPU is actually the most expensive memory you can find,” explains Frank Frankovsky, who oversees Facebook’s hardware efforts. “So we said: ‘Let’s see what happens if we start turning off chunks of the cache. Let’s see how small of a cache we could live with.’”
First, they reduced the cache from 3 megabytes to 2. Then to one and a half. And then to one. All the while, the machine performed just as well, handling the same number of requests per second. Speed didn’t degrade until they took the cache all the way down to a half megabyte.
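To give a rough sense of how such a test might be scored, here is a minimal sketch, not Facebook’s actual harness: the cache resizing itself would happen out of band in firmware, and the endpoint and timing below are placeholders.

```python
# Minimal sketch of a throughput check: hammer a test endpoint and report
# requests per second. The URL and duration are hypothetical; the CPU cache
# size is changed separately (e.g. in firmware) between runs.
import time
import urllib.request

def requests_per_second(url, duration_s=10):
    """Issue requests to `url` for `duration_s` seconds; return observed throughput."""
    count = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()
        count += 1
    return count / duration_s

if __name__ == "__main__":
    # Re-run after each cache configuration (3 MB, 2 MB, 1.5 MB, ...) and
    # compare the numbers; in Facebook's test, throughput held steady until
    # the cache dropped to half a megabyte.
    print(requests_per_second("http://test-server.local/endpoint"))
```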
It was just one test — a small piece of knowledge, as Frankovsky calls it, that may help Facebook understand how its software makes use of the hardware running beneath it. But it shows why the market for server chips is about to change in a very big way.
What Frankovsky’s little tale shows is that Facebook doesn’t necessarily need everything that chip makers like AMD and Intel build into today’s server chips — and that it can save an awful lot if it can somehow move to chips that are better suited to its particular breed of software. Facebook has already stripped down other parts of the server hardware that drives its massive web empire, and now, it’s looking to strip down the CPUs as well.
“We went vanity-free on the server design,” Frankovsky says. “The next place to go is to look at how to best utilize the componentry, making sure — at the component level — you’re making good use of every ounce of horsepower you’re putting on the motherboard.”
In some ways, Facebook is already doing this. It already works with Intel and other hardware vendors to customize, to some degree, the chips that go into its servers — though the company won’t discuss the details, apparently because its hardware partners haven’t authorized it to do so. But Frankovsky makes it clear the web giant plans on taking things much further.
That Facebook cache test was run with an eye on a new breed of chips that’s slowly moving into the server world. Frankovsky calls them “smartphone-class CPUs.” Others call them “wimpy cores.” Basically, they’re ultra-low-power server chips based on architectures that were originally designed for smartphones. Many hardware makers — including big names like Dell and AMD as well as upstarts like Calxeda and AppliedMicro — are working towards servers that use chips based on the ARM architecture that drives your iPhone, and Intel has responded to this groundswell with server chips based on its Atom mobile architecture.
Some have downplayed the wimpy core idea, questioning whether these chips have the oomph to run server workloads. But the whole idea is to slim things down in the data center — to do more with less — and as Frankovsky shows with his little test, today’s web data centers could use some slimming, at least in some cases.
Certainly, the current breed of ARM chip isn’t up to the task. But a more robust breed is on the way — a 64-bit incarnation that can address far more memory than today’s 32-bit chips — and people like Frankovsky say it’s only a matter of time before these designs provide a viable alternative to chips based on Intel’s x86 architecture. “I think it’s going to shake things up sooner than you think,” Frankovsky told us almost a year ago.
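A quick bit of arithmetic shows why the 64-bit jump matters for servers. This is illustrative only; shipping chips implement fewer physical address bits than the full 64.

```python
# A 32-bit address space tops out at 4 GiB, which is cramped for a server;
# a 64-bit address space removes that ceiling for all practical purposes.
GIB = 2**30

print(f"32-bit addressable memory: {2**32 / GIB:.0f} GiB")   # 4 GiB
print(f"64-bit addressable memory: {2**64 / GIB:.0f} GiB")   # ~17 billion GiB
```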
As the pundits argue about the technical merits of these chips, they miss the larger picture. ARM chips are so attractive to people like Frankovsky because they provide more options. ARM is an architecture that’s licensed to a wide range of companies, and it can provide an antidote to the hegemony Intel has long enjoyed in the server world.
“Competition drives a lot of really good stuff — and there are more ARM licensees than I can count,” Frankovsky says. “When I’ve seen a relatively open and level playing field like that, good things are bound to happen… That level of investment is bound to yield some very cool stuff.”
In the end, it gives Facebook more to choose from. It may not need to buy the chip with 3 megabytes of cache. It may have the option of buying a processor with only a half megabyte. “With the number of people that are investing in that ARM ecosystem — since there are so many choices — there’s bound to be somebody that’s building something that’s just about right for you.”
In fact, many of these players are intent on offering hardware that is ideally suited to Facebook and other online giants that are looking to hone their data center operations. “The new emerging players that see the shift that’s happening in this market? They’re all ears around some of the customers that they believe are leading indicators for where the CPU architecture should go.”
This was borne out just last week when Calxeda and AppliedMicro — two of the companies working on ARM designs — backed Facebook’s plan to split servers into tiny pieces you can easily add and remove as you see fit. At the Silicon Valley get-together where this plan was unveiled, AppliedMicro vice president and general manager Vinay Ravuri told us the company’s 64-bit ARM chip will officially arrive later this quarter. And it will be welcomed.