I was at the Gartner Data Center Conference this week and completely out of my comfort zone, which is always an interesting and educational place to be. The week was awash with data center issues, talk of DevOps and XaaS delivery, but the topic that most intrigued me was the SDE – the Software Defined Everything. Before explaining what a “Software Defined Something” is, let’s consider the predecessor to SDE; for this example we will use the Software Defined Network (SDN).
Software Defined Networking Example
Today, when a data center administrator wants to program a switch to do something, he talks directly to the switch. If the physical switch changes, he will almost certainly need to change the way in which he talks to the device.
However, in the wonderful world of the Software Defined Network, our disturbingly familiar data center operator talks to a software layer called a “controller”. The controller then passes those instructions on to the physical network infrastructure. In a nutshell, an SDN is simply a control layer that abstracts what the operator says away from the real devices he is talking to.
Note: There’s still a physical network, wires to trip over, routers, firewalls, and switches to pay for, floor space to waste and cooling energy to consume. [1]
There are two key advantages to this approach. The first is that if some of the physical devices fail or are replaced, the interface between the operator and the SDN controller is not affected, so the change can be made with minimal disruption. The second is that the controller layer can make decisions about how best to utilize the physical resources, perhaps to meet certain service level objectives.
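To make the idea concrete, here’s a minimal Python sketch of that control layer. Everything in it is illustrative – the class names, the drivers and the VLAN example are my own invention, not any real controller’s API – but it shows why replacing the hardware underneath doesn’t change what the operator says to the controller.

```python
# Hypothetical sketch of an SDN-style controller: the operator expresses
# intent once and per-vendor drivers translate it for each physical device.

class SwitchDriver:
    """Vendor-specific adapter; each switch model speaks its own dialect."""
    def create_vlan(self, vlan_id: int) -> None:
        raise NotImplementedError

class AcmeSwitchDriver(SwitchDriver):
    def create_vlan(self, vlan_id: int) -> None:
        print(f"ACME CLI: vlan database; vlan {vlan_id}")

class WidgetSwitchDriver(SwitchDriver):
    def create_vlan(self, vlan_id: int) -> None:
        print(f"Widget REST API: POST /vlans id={vlan_id}")

class Controller:
    """The abstraction layer: one interface for the operator, many drivers
    underneath. Replacing hardware only means swapping a driver."""
    def __init__(self) -> None:
        self._devices: list[SwitchDriver] = []

    def register(self, driver: SwitchDriver) -> None:
        self._devices.append(driver)

    def create_vlan(self, vlan_id: int) -> None:
        # The operator's instruction is fanned out to whatever physical
        # gear happens to sit behind the controller today.
        for device in self._devices:
            device.create_vlan(vlan_id)

controller = Controller()
controller.register(AcmeSwitchDriver())
controller.register(WidgetSwitchDriver())
controller.create_vlan(42)  # the operator never talks to a switch directly
```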
What about “Software Defined Other Things”?
I used SDN as an example, but the talk is now all about an entire “Software Defined Data Center” (SDDC or SDD). This includes Software Defined:
- Network – this is addressed above and represents the most mature and well-known of the software defined systems.
- Compute – this is achievable by using a hypervisor in a virtualized environment to abstract the programmable compute from the physical processor. Technically, I’d say that SDC is not generally available today, as you really need a controller that is able to intelligently drive the placement of virtual guests into an environment based on the service level objective.
- Storage – this makes a lot of sense. Don’t talk directly to a disk or even a storage array; talk to an interface that abstracts the underlying devices. Tell the controller the service level (speed, level of availability, geo, etc.) and let it find the appropriate device for you – there’s a short sketch of this idea after the list.
- Power – I’ve managed to suppress my natural cynicism thus far, but from what I heard at the conference and what I’ve read subsequently, this is just the intelligent power management people jumping on the “SD” bandwagon. I get that reducing demand and having predictive supply is super important, but I am not really seeing anything that truly fits the open abstraction of the control layer from the physical implementation.
- Software – someone jokingly suggested that we should have “Software Defined Software” – remember 4GL and 5GL systems?
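Here’s the sketch promised in the Storage bullet. It’s purely illustrative – the backend names, attributes and selection rule are invented – but it shows the shape of the thing: the operator states a service level objective (speed, availability, geo) and the controller, not the human, picks a device that satisfies it. The same pattern would apply to placing virtual guests for Software Defined Compute.

```python
# Hypothetical Software Defined Storage controller: provision by service
# level objective rather than by naming a specific device.

from dataclasses import dataclass

@dataclass
class StorageBackend:
    name: str
    iops: int            # rough measure of speed
    availability: float  # e.g. 0.999
    region: str

@dataclass
class ServiceLevel:
    min_iops: int
    min_availability: float
    region: str

class StorageController:
    def __init__(self, backends: list[StorageBackend]) -> None:
        self._backends = backends

    def provision(self, slo: ServiceLevel) -> StorageBackend:
        """Return the least capable backend that still satisfies the SLO."""
        candidates = [
            b for b in self._backends
            if b.iops >= slo.min_iops
            and b.availability >= slo.min_availability
            and b.region == slo.region
        ]
        if not candidates:
            raise RuntimeError("no backend satisfies this service level")
        # Don't over-provision: pick the cheapest fit rather than the best device.
        return min(candidates, key=lambda b: (b.iops, b.availability))

controller = StorageController([
    StorageBackend("sata-array-1", iops=5_000, availability=0.999, region="eu"),
    StorageBackend("ssd-array-1", iops=80_000, availability=0.9999, region="eu"),
    StorageBackend("ssd-array-2", iops=80_000, availability=0.9999, region="us"),
])

volume = controller.provision(
    ServiceLevel(min_iops=20_000, min_availability=0.999, region="eu"))
print(volume.name)  # "ssd-array-1" – the controller chose the device, not the operator
```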
What’s the impetus?
So why is this such a big deal right now? There are a few factors. The first is that general demand on data centers is increasing, so anything that makes them more efficient is going to get people’s attention. The second, and more important, factor is the massive growth in cloud-like systems. One of the key features of a cloud solution is the “on demand” capability, which means that systems can be added, scaled up/down and removed in real time. If you need to send someone physically over to a cabinet to plug in a network cable in order to add a system, then you’re almost certainly missing the SLAs for your cloud service!
What about the mainframe?
This really does feel like another example of the distributed world trying so very hard to replicate the mainframe. If I stand up a dozen virtual machines on an IFL and define a network grid between them all, I will have real Software Defined Compute and Network – without any interconnecting cables or hardware. I can also abstract the storage from the machines. Better still, I can use the existing built-in policy management capabilities of the machine to have it assign resources dynamically to meet the service level objectives. BTW – if you used mainframes, you probably wouldn’t even need to worry about power, never mind having to drive it via SDP.
Footnote: If you are familiar with how a switch, router or firewall really works, then you might be wondering what an SDN really brings to the party. When you communicate with a physical device you are still talking to software. (Under the covers, these devices are actually just a PC with some network ports and software that routes, switches or governs network traffic.) The difference is that with an SDN the controller is abstracted from the physical devices, so it can sit above a multitude of different types of hardware – it normalizes all of the capabilities of the underlying platforms.
[1] Unless you’re using a mainframe, that is.
If you're especially interested in the Software Defined Storage case then head over to http://cto.vmware.com/2014-prediction-democratization-storage-management/ and take a look at Christos Karamanolis' overview of SDS. It's a very succinct description.
Posted by: Andrew Chapman | 01/05/2014 at 03:04 PM