Microservices has become the latest buzzword in IT, promising a new approach to deploying applications and services in the cloud. But much of the debate around microservices has centred on whether containers or some other technology is best for implementing them, while Red Hat argues that the API should be the focus.
Enterprises and service providers are looking for a better approach to deploying applications in a cloud environment, and microservices is being heralded as the way forward. By breaking down applications and services into smaller, loosely coupled components, those applications can be made more scalable and easier to develop, or so the theory goes.
A lively debate sprang up at the recent OpenStack Silicon Valley conference in California over whether containers or virtual machines are best suited for implementing microservices.
Containers are more lightweight and faster to deploy, argued one side, while virtual machines are a more mature technology and offer better isolation between workloads, according to those backing the other side of the argument.
But this debate is somewhat missing the point, according to Arun Gupta, Red Hat’s director for technical marketing and developer advocacy. If the aim of microservices is to make it easier to build and deploy applications, the underlying technology should not matter so long as the API layer remains the same, at least for the developer.
“I don’t care what these containers are. They are self-contained. Today they could be JBoss, tomorrow they could be Node.js, then Fuse or [Apache] Camel. I don’t care really, because they are using RESTful APIs to talk to each other, and as long as they are using RESTful APIs, I’m cool,” he said.
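Gupta's point can be sketched concretely: as long as both sides agree on the HTTP and JSON contract, the caller is indifferent to how the service behind it is implemented. A minimal sketch in Python (the "inventory" service and its payload are invented for illustration; in practice either side could be JBoss, Node.js or anything else):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal "inventory" service. Its implementation could be swapped
# out entirely; callers only ever see the RESTful API.
class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"item": "widget", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The consuming service needs only the URL and the JSON contract.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/inventory") as resp:
    data = json.loads(resp.read())

print(data["in_stock"])  # the caller never learns how the service is built
server.shutdown()
```

Swapping the server for a JBoss or Node.js implementation would leave the consuming code untouched, which is exactly the decoupling Gupta describes.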
Red Hat is currently using Docker containers and Kubernetes, the orchestration framework for containers developed by Google, as part of its OpenShift platform-as-a-service offering for developing and operating cloud-based applications.
This is because the combination of Kubernetes and Docker is currently the most mature technology, with Kubernetes providing vital functions needed for a successful microservices deployment, such as service discovery, container management and communication between the components.
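Service discovery is the part of this that Kubernetes makes most visible to developers: a named Service object gives a stable address for a set of containers, so components can find each other by name rather than by IP. As a hedged illustration only (the service name, labels and ports here are invented, not taken from OpenShift), a minimal manifest might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: inventory          # other pods can now reach this at http://inventory
spec:
  selector:
    app: inventory         # traffic is routed to any pod carrying this label
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # port the containers actually listen on
```

Pods come and go, but the `inventory` name stays stable, which is what makes the loosely coupled communication between components workable.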
This may change in the future owing to developments such as the OpenStack Magnum project, which is intended to serve as a framework supporting all container technologies, but that will depend on what Red Hat’s customers see as a requirement, Gupta explained.
“OpenShift v3.0 is based on Kubernetes, but we are a part of all those initiatives like Open Container, and as one or another technology becomes more prominent and relevant to our customers, we invest in it accordingly,” he said.
“So, in three years, four years, whatever, if OpenStack Magnum becomes very relevant, we can abstract that within OpenShift so that Kubernetes becomes just one of the things, but today Kubernetes and Docker are the kings.”
Gupta was keen to promote Red Hat’s comprehensive software stack, saying that the firm is “very well positioned” to deliver a microservices architecture.
“All the way up from OpenStack at the bottom, you’ve got the RHEL platform, on top of that you’ve got OpenShift, using Docker and Kubernetes, and on top of that you’ve got the JBoss Application Server, and on top of that there is developer tooling using JBoss Developer Studio,” he said.
“OpenShift allows you to do public cloud or private cloud or hybrid cloud. JBoss middleware is our primary [app] platform, but we understand not everyone wants Java so we provide Vert.x, which is a true polyglot asynchronous application framework. With the acquisition of FeedHenry last year, we have Node.js applications as well as a mobile story.”
However, getting back to microservices, Gupta drove home his point that users should be looking at the API layer of the stack.
“When you design your microservices, all the concerns you are looking at are really application-level concerns,” he said.