HPE is making a bet that enterprises will be prepared to invest in new data centre infrastructure that offers them the flexibility to run existing applications alongside speedily delivered cloud-native applications and services, all thanks to automation that provides the resources as and when they are needed.
However, while HPE’s Synergy platform sounds impressive, there are doubts over whether it really differs a great deal from the hyperconverged infrastructure platforms offered by other vendors. Furthermore, the hardware is not based on existing HPE systems such as its ProLiant servers, meaning that Synergy is effectively aimed at new build infrastructure.
Announced at the HPE Discover event in London at the start of December, HPE Synergy is due to be available during the second quarter of 2016. It is based around HPE’s concept of “composable infrastructure”, under which all of the compute, storage and network resources are combined into a pool and can be allocated as required by applications and services.
To anyone who has been watching developments in the data centre with virtualisation and cloud computing, this will have a familiar ring to it. Numerous hardware platforms, such as VCE’s Vblock and Cisco’s UCS, have sprung up over the past few years that aim to better integrate servers with storage and some kind of interconnect fabric to deliver an optimised platform for tasks such as running virtual machines.
But according to HPE, these kinds of platforms are a compromise, and are really only suitable for running specific workloads, while Synergy can be used to run anything.
“HP introduced the notion of converged infrastructure about six years ago, but it’s no longer enough to help customers get to market fast enough. Then hyperconverged infrastructure came along, but customers have found themselves dealing with compromises because these systems are only good for certain workloads,” said Paul Miller, vice president of marketing for converged systems at HPE.
In contrast, Synergy is a new class of infrastructure that is “defined by code”, according to Miller, in keeping with the notion of the software defined data centre (SDDC). Another HPE executive likened it to an engineered system such as Oracle’s Exadata or Exalogic platforms, but optimised for general purpose computing.
Not everyone sees it this way. Ovum principal analyst Roy Illsley said that HPE’s new platform “looks pretty much like a hyperconverged solution that just about everybody else has got,” although he conceded that the orchestration and management side of the platform look interesting.
Key parts of the management software include the HPE OneView Composer and HPE Image Streamer, both of which are embedded in the Synergy infrastructure as physical appliances that oversee the modules making up the compute, storage and network resources.
Composer automatically discovers and configures any modules it finds, while Image Streamer stores and delivers bootable images for whatever environment a compute node needs to handle a specific workload, such as a virtual machine.
HPE’s vision is that the IT department in an organisation will use Composer and Image Streamer to create a catalogue of templates that comprise all of the software components required to handle a specific workload, whether that is operating a private infrastructure as a service (IaaS) cloud, a database cluster, or a web application stack.
“With a fluid pool of resources, a developer can pull out the assets they require as they are needed. When he’s done, he can free those assets back to the pool for use by other applications and workloads. This will allow, when one of the lines of business calls up and says they need a platform to develop a new app, for that to happen in minutes rather than days or weeks,” said Miller.
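The allocate-and-release model Miller describes can be illustrated with a short sketch. This is a toy model of the composable-pool concept only, not HPE’s actual OneView API; all class, method and template names below are hypothetical.

```python
# Toy model of a "composable infrastructure" resource pool.
# Illustrates the allocate/release concept only -- this is NOT
# HPE's actual OneView API; all names here are hypothetical.

class ResourcePool:
    """Tracks a fluid pool of compute, storage and network capacity."""

    def __init__(self, compute, storage_tb, net_ports):
        self.free = {"compute": compute,
                     "storage_tb": storage_tb,
                     "net_ports": net_ports}

    def allocate(self, template):
        """Carve out the resources a template asks for, if available."""
        if any(self.free[k] < v for k, v in template.items()):
            raise RuntimeError("insufficient free resources in pool")
        for k, v in template.items():
            self.free[k] -= v
        return dict(template)  # handle used later to release the assets

    def release(self, allocation):
        """Return a previous allocation to the pool for other workloads."""
        for k, v in allocation.items():
            self.free[k] += v


pool = ResourcePool(compute=32, storage_tb=100, net_ports=64)

# A "template" describing what a hypothetical web application stack needs.
web_stack = {"compute": 4, "storage_tb": 10, "net_ports": 8}

alloc = pool.allocate(web_stack)   # developer pulls assets from the pool
print(pool.free["compute"])        # 28 -- capacity in use
pool.release(alloc)                # done: assets go back to the pool
print(pool.free["compute"])        # 32 -- pool restored
```

The point of the sketch is the lifecycle: a template defines what a workload needs, the pool hands those assets out, and releasing them makes the same capacity available to the next line of business.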
This level of flexibility is what marks out Synergy, according to Andy Buss, consulting manager for data centre infrastructure and client devices at IDC Europe.

“It’s a stateless computer, designed to be a reconfigurable pool of resources,” he said. “The best way to think about it is a set of storage, networking, compute and memory resources that you can divvy up using software to define your virtual computer for whatever task you want.”
“The whole deal here is that if you’ve got a programmable infrastructure, you can apply templates very easily against that, because it is stateless and everything can be programmed, so the template defines how and where and what it works on. That to me is the key of the design,” Buss added.
HPE is keeping some of the details regarding Synergy under wraps until the platform actually ships, including the exact specifications of the fabric used to interconnect the various modules. This is claimed to be a sophisticated non-blocking interconnect that allows for high-speed communication between all the different resources.
The specifications of the compute nodes have also yet to be disclosed, although HPE stated that these are all-new hardware and not repurposed ProLiant blades. At launch, there will be 480, 620, 660 and 680 Compute Modules available. These numbers correspond with existing ProLiant models, and so may offer a clue as to their specifications.
Meanwhile, storage modules hold 40 SAS drives, with a non-blocking 12Gbps SAS fabric that enables any drive or group of drives to be mapped to any compute module.
According to Buss, this approach is “not like full-on storage arrays, but it’s also not like software-defined storage where you put direct attached drives into each server and then aggregate them. It’s somewhere in between”.
The compute, storage and networking modules all fit into a 10U rack-mount enclosure, alongside the Composer and Image Streamer modules. Customers can scale by adding additional enclosures, with two Composers capable of managing a deployment of up to five racks, according to HPE.
However, the fact that the hardware has all been designed from the ground up for Synergy means that it is essentially targeting new build infrastructure, in much the same way that pre-integrated solutions such as the VCE Vblock system are.
Like the vendors of those systems, HPE seems to expect customers to gradually replace their legacy infrastructure with Synergy kit over time, and this brings issues of its own.
“My concern is, are they forgetting about those people that have a data centre with racks and storage and networking already? If you put this into an existing data centre that is already cabled up in a certain way, the challenge is how to integrate it,” said Illsley.
“HPE might be assuming that there are enough people getting to the point where they’re going to be replacing a significant portion of their data centre that just ripping out and replacing with something new in an existing data centre is cost effective, rather than just this rack and that storage array,” he added.
Buss agreed, saying that Synergy is just like every other integrated infrastructure vision, in that you have to buy into the notion of getting everything from one vendor.
“That means you have to buy into HPE’s vision to get the full benefit. If you’re looking at a lot of third-party kit, you’ve got to think long and hard about how you try to integrate it, or whether you don’t and you just start anew, building up new infrastructure based on Synergy,” he said.
If customers do this, their Synergy deployment will end up being yet another silo they have to manage alongside everything else.
“My biggest question mark is how HPE is going to deal with the multi-vendor element, bringing that into the automation and orchestration that they do?”
“If it’s one vendor only, then you need to have another tool that does multi-vendor co-ordination and orchestration across the rest of the data centre, and the question then becomes which tool is responsible, which is the strategic tool versus the tactical one, and I think for this reason HPE needs to look at multi-vendor management,” he said.
“Automation and orchestration only makes sense at the entire data centre level, so the key question that HPE needs to answer is whether it wants to play at the software defined data centre level, or does it want to be a provider of software defined data centre infrastructure that is managed by someone else?” Buss added.
But if HPE is to be believed, Synergy’s automated, self-managing platform will free up valuable IT staff to focus on building new services and facilities, rather than simply managing the existing infrastructure and keeping the lights on.
“This is what customers tell us they are looking for,” said Miller.