Open-source hardware is a reality IT vendors (and others in the IT supply chain) are now reckoning with. Open hardware-design crowdsourcing, spearheaded by Facebook’s Open Compute Project (OCP), has crystallized a major market that traditional hardware suppliers find themselves increasingly left out of: the hyperscale data center.

A Facebook, a Google or a Microsoft – when planning to spend its next several hundred million dollars on a data center – is no longer interested in all the value-add features an HP or a Dell has stuck onto its latest servers. These customers know exactly what they need and how much they are going to pay for it – nothing less and nothing more – and if you cannot deliver exactly that, there is nothing left to talk about.

The same goes for other data center gear: from IT racks to power and cooling systems. This reality is not lost on Emerson Network Power, a long-time supplier of infrastructure products into the data center space, which has recently created a business unit dedicated explicitly to supplying these hyperscale customers. Emerson announced the unit at OCP’s Open Compute Summit in Silicon Valley in January.

A processor group hug
One of the biggest announcements at the summit was based on a simple yet novel idea: why not standardize the processor socket? As Frank Frankovsky, director of hardware design and supply chain at Facebook and OCP’s head, puts it, there really is no good reason to reinvent the wheel. “Let’s quit coming up with all these different socket methodologies and wasting a lot of engineering hours [on] different ways to glue CPUs onto motherboards,” he says.

To that end, Frankovsky announced the “group hug board”, a common-slot open-source architecture specification for motherboards that can take CPUs by Intel or AMD, as well as by ARM-chip makers like Applied Micro or Calxeda. All four companies are on board to support the spec, he said. A common processor slot gives these vendors a framework to design around a common motherboard, “so they don’t have to go and innovate where it doesn’t matter.”

On the customer end, this approach would help greatly with benchmarking. Now that the ARM ecosystem is growing in strength, there will eventually be so many CPU variants that it will be difficult to compare processors’ performance without eliminating all other variables. Today’s CPU benchmark tests are difficult to interpret because each CPU sits in a system designed specifically around it. “You’re not really sure what generated the performance,” Frankovsky says. “Was it the actual CPU core, or was it the disk controller, or was it some other I/O controller on the board [or] a network interface card?”

With a standardized motherboard and CPU slot, only one variable changes, which levels the playing field for more accurate comparisons.
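The controlled-variable argument can be illustrated with a toy model (the component names and numbers below are hypothetical, purely for illustration, and not from the article):

```python
def system_score(cpu, disk, nic):
    # Toy model: total "performance" mixes contributions from every component,
    # so comparing whole systems does not isolate the CPU.
    return cpu + disk + nic

# Two vendor systems that differ in CPU *and* disk: the result is confounded.
a = system_score(cpu=100, disk=40, nic=20)   # 160
b = system_score(cpu=110, disk=25, nic=20)   # 155
# The "slower" system (b) actually has the faster CPU.

# Same two CPUs on a common board (identical disk and NIC):
a2 = system_score(cpu=100, disk=40, nic=20)  # 160
b2 = system_score(cpu=110, disk=40, nic=20)  # 170
# Now the gap (b2 - a2 == 10) is exactly the CPU difference.
```

This is the point of the common slot: once everything but the processor is held fixed, the measured difference can only come from the processor.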

64-bit ARM coming to a server near you
One of the four, Applied Micro, got significantly ahead of the competition in the ARM-based server space by announcing the world’s first 64-bit ARM processor, called X-Gene. Calxeda, one of Applied Micro’s biggest rivals in the ARM server race, said in October it expected to have its 64-bit ARM chip only in 2014. Another competitor, Samsung, has reportedly licensed a 64-bit processor design from UK-based ARM Holdings, but has not announced a roadmap for a product based on the architecture.

Vinay Ravuri, VP and general manager of Applied Micro’s processor business unit, says, “We think that we’re at least a year or 18 months ahead of anyone with a 64-bit ARM.” The company already has a number of server vendors exploring the processor. Dell’s chief architect Jimmy Pike showed a prototype Dell motherboard with six X-Gene nodes. Ravuri says he expects multiple system vendors to announce products built around X-Gene chips by the end of 2013.

Storage: a 32-bit ARM niche
Applied Micro does not have a 32-bit ARM product, as most servers on the market use 64-bit processors. But that does not mean 32-bit processors cannot find their way into the data center.

At the summit, Calxeda announced a server motherboard for Open Vault, an Open Compute storage chassis, outfitted with its System-on-Chip (SoC) cards. Intel also announced a server board for Open Vault. Both solutions plug into the storage chassis’ SAS expander slot, eliminating the need to plug each chassis into a central storage server.

Open Vault is a high-density storage chassis, holding up to 30 drives in a 2U box, and was designed to fit into OCP’s Open Rack.

Sum of parts is overrated
Facebook and OCP have been focusing on rack design for a while, since the typical IT rack of today has not been redesigned in decades. Open Rack is an outcome of this effort, but the effort’s goals go beyond designing a rack to hold today’s OCP servers. Intel and Facebook jointly announced a glimpse of the rack’s future: a rack whose server components communicate via photonics, and where each component can be replaced independently of the others.

This is a radical departure from the norm, where upgrading server CPUs means replacing the servers. But massive data center users like Facebook, just as they have radically changed the way society interacts today, are on their way to changing the way companies think about their critical infrastructure.

This article originally appeared in the 28th issue of the DatacenterDynamics FOCUS magazine.