I was listening to the VMware Community Podcast with one of their Advisory practice leaders. He made a comment I found pretty interesting. He mentioned how organizations are moving away from the private cloud as the mechanism for transformation and moving toward the software-defined data center (SDDC). This made me consider how software-defined is different from private cloud. After all, can’t you use SDDC architecture to build a private cloud? Come to think of it, can’t you use private cloud as part of your SDDC strategy? So, what’s the difference?
I think of software-defined as the method in which you build and manage your infrastructure. Software-defined is an approach to design that enables the infrastructure to be managed via APIs. It is composed of different building blocks such as software-defined storage (SDS), software-defined networking (SDN), compute virtualization and related services. It’s an abstraction of the data center’s infrastructure.
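To make “managed via APIs” concrete, here’s a minimal sketch. Everything in it is hypothetical — the class and method names don’t correspond to any real vendor product — but it shows the shape of the idea: each building block is provisioned and queried through calls rather than by logging in to a physical device.

```python
# Hypothetical sketch of an API-driven data center. No real vendor API is
# being imitated here; the names are invented for illustration.

class SoftwareDefinedDataCenter:
    """Toy abstraction: every resource is created and queried via methods,
    never by touching a physical device directly."""

    def __init__(self):
        self.resources = []

    def provision(self, kind, name, **spec):
        # kind maps to an SDDC building block: "compute", "sds", or "sdn"
        resource = {"kind": kind, "name": name, "spec": spec}
        self.resources.append(resource)
        return resource

    def inventory(self, kind=None):
        # One query surface across all building blocks
        return [r for r in self.resources if kind is None or r["kind"] == kind]


sddc = SoftwareDefinedDataCenter()
sddc.provision("compute", "web-vm-01", vcpus=2, memory_gb=8)
sddc.provision("sds", "app-volume", size_gb=500, tier="gold")
sddc.provision("sdn", "app-segment", overlay="vxlan-5001")

print(len(sddc.inventory()))             # 3
print(sddc.inventory("sds")[0]["name"])  # app-volume
```

The point of the sketch is the uniformity: compute, storage and networking are all driven through the same programmatic surface, which is what lets higher layers automate against them.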
Software-defined’s dependency on the hardware layer
Once you abstract the physical infrastructure, you break the reliance on vendor-specific implementations. Of course, this means there need to be standards between the software abstraction and the physical hardware in order to properly manage and scale the SDDC. An example of this is storage. If you choose to implement SDS on hardware that you own, you still need to know the operational details of the underlying hardware. Think of all of the alerts, the performance data and the provisioning that’s done using the storage vendor’s management tools. In a full SDS implementation, these tools need to be integrated at the abstraction layer.
It gets even more complicated once you start to add storage from other vendors or decide to adopt a “bring your own disk” (BYOD) strategy where you build a SAN on commodity x86 hardware. The same challenges exist in SDN. Managing the physical infrastructure doesn’t go away in the software-defined world. However, the act of abstracting the data center infrastructure has the advantage of being recursive in nature: abstracted infrastructure can run on other abstracted infrastructure.
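One way to picture that multi-vendor integration problem is an adapter layer: the SDS abstraction defines a single storage contract, and each vendor’s native management tooling — including a BYOD commodity build — is wrapped to speak that contract. This is just a sketch; the vendor names and method signatures below are invented for illustration.

```python
# Sketch of the SDS integration point: one abstract storage contract,
# with per-vendor adapters wrapping each backend's native tooling.
# "Acme" and all method names here are fictional.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """The contract the SDS abstraction layer manages against."""

    @abstractmethod
    def provision_lun(self, size_gb: int) -> str: ...

    @abstractmethod
    def alerts(self) -> list: ...

class AcmeArrayAdapter(StorageBackend):
    """Wraps a fictional traditional array's management tools."""
    def provision_lun(self, size_gb):
        return f"acme-lun-{size_gb}g"
    def alerts(self):
        # Translated from the array's native alerting into a common shape
        return [{"severity": "warn", "msg": "spare drive count low"}]

class CommodityX86Adapter(StorageBackend):
    """A 'bring your own disk' backend built on commodity servers."""
    def provision_lun(self, size_gb):
        return f"byod-vol-{size_gb}g"
    def alerts(self):
        return []

def provision_everywhere(backends, size_gb):
    # The SDS layer no longer cares which vendor is underneath
    return [b.provision_lun(size_gb) for b in backends]

luns = provision_everywhere([AcmeArrayAdapter(), CommodityX86Adapter()], 100)
print(luns)  # ['acme-lun-100g', 'byod-vol-100g']
```

The adapter is where the operational knowledge of each array lives; above that line, alerts and provisioning look the same regardless of whose disks are spinning.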
This is sometimes called the ultimate test of virtualization: the ability of a virtualized system to run a virtualized instance of itself. In x86 compute, it’s called nested virtualization. I’ve used it to build many of the labs we do on Virtualizedgeek. Once software-defined becomes a mature ecosystem of hardware vendors and software interfaces, you’ll be able to build software-defined infrastructure atop other software-defined infrastructure.
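As a concrete example (assuming VMware ESXi as the hypervisor, since that’s what those labs run on), exposing the hardware virtualization extensions to a guest so it can run its own hypervisor comes down to a single per-VM setting — check your ESXi version’s documentation for the exact knob, but on recent releases it’s this line in the VM’s .vmx file, or the “Expose hardware assisted virtualization to the guest OS” checkbox in the vSphere client:

```
vhv.enable = "TRUE"
```

On a Linux/KVM host the analogous switch is the `nested=1` parameter on the `kvm_intel` or `kvm_amd` kernel module. Either way, the guest then sees VT-x/AMD-V and can host its own VMs.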
Private cloud is a service
Bringing the discussion back to private cloud: private cloud is a service, as opposed to a technical architecture. There is no requirement that private cloud be delivered via virtualization, so you could easily buy a managed private cloud if you wanted. According to the NIST definition of cloud, it’s a service that is on demand, has broad network access, incorporates resource pooling, has rapid elasticity and is a measured service. Private cloud is a service that could either replicate a traditional infrastructure or provide virtualized instances of infrastructure components. We can keep the SDS use case going.
One could build or subscribe to a private cloud service that provides storage as a service. This could be object-based, NFS or even block-based. To manage and provision the storage, the service provider has the option of giving you API access to the storage, the way Amazon Web Services (AWS) does. Or the provider could carve out part of an array and give you command-level access to the storage array. The point is that cloud is a service while software-defined is an architecture.
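Here’s a sketch of what that storage-as-a-service surface looks like from the consumer’s side. The class and its metering are hypothetical — this imitates the shape of a cloud storage API, not AWS’s actual one — but it captures two NIST characteristics: on-demand self-service (just call put) and measured service (usage is metered for billing).

```python
# Hypothetical storage-as-a-service facade: object-style put/get over an
# opaque backend, with metering so the service can be billed per use.
# Not any real provider's API; names invented for illustration.

class StorageService:
    def __init__(self):
        self._objects = {}
        self.bytes_stored = 0    # "measured service": capacity is metered
        self.request_count = 0   # ...and so are requests

    def put_object(self, key, data: bytes):
        self.request_count += 1
        # Account for overwrites so the meter tracks net capacity used
        self.bytes_stored += len(data) - len(self._objects.get(key, b""))
        self._objects[key] = data

    def get_object(self, key) -> bytes:
        self.request_count += 1
        return self._objects[key]

    def usage_report(self):
        # What the provider bills against; what the consumer sees instead
        # of array-level alerts and LUN management
        return {"bytes": self.bytes_stored, "requests": self.request_count}


svc = StorageService()
svc.put_object("backups/db.dump", b"x" * 1024)
assert svc.get_object("backups/db.dump") == b"x" * 1024
print(svc.usage_report())  # {'bytes': 1024, 'requests': 2}
```

Notice what the consumer never sees: the array, the disks, the vendor tools. That opacity is exactly what makes it a service rather than an architecture.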
You can incorporate software-defined as part of your private cloud build-out, and you can leverage private cloud as part of your software-defined data center.