This entry is part of a series of postings that consider the challenges of deploying enterprise software into the cloud. I added this topic to the list because it does not really fit into any of my original categories. The topic relates to where you might deploy each part of a stack, from the end user to the core servers, and how that might affect the performance of your systems.
One concern that people often bring up is that moving any part of an enterprise deployment model into the cloud will naturally cause degradation in performance because of increased latency. In reality this is not always the case. Read on...
In the conventional (non-virtualized/non-cloud) deployment model the core system server and the web application server will be positioned very close to each other. Often they will be physically in the same data center with a fat umbilical cord of fiber linking them together. So very little latency there and no real bandwidth issues either. The end user connects to the local application server from her office and it serves up the client application into her browser.
The Perceived Cloud Model
The concern comes from taking the core server and web application server out to “the cloud”. Now those servers are in some undisclosed remote location. Now consider what happens when the user navigates around the repository and finds a nice fat 120MB PowerPoint presentation to download. That file needs to get from the web application server to the user’s desktop over the ether. Now she has latency and bandwidth challenges up the wazoo (technical term). The 120MB presentation is going to be downloaded lock, stock and barrel to the user's local desktop. Once it has all made it down (no byte streaming a PPT) she can open it.
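To put rough numbers on that concern: for a bulk download, total time is approximately the link's round-trip latency plus file size divided by bandwidth, and for a file this size the bandwidth term dominates. A minimal sketch, assuming an illustrative 20 Mbps WAN link and 80 ms round trip (both made-up figures, not measurements):

```python
def download_seconds(size_mb, link_mbps, rtt_ms=80):
    """Rough time to pull a file over a link: one round trip to
    start the transfer, then size / bandwidth. All link figures
    here are illustrative assumptions, not measurements."""
    return rtt_ms / 1000 + (size_mb * 8) / link_mbps

# 120 MB over an assumed 20 Mbps WAN link: roughly 48 seconds
wan = download_seconds(120, 20)

# The same file over an assumed 1 Gbps data-center LAN: about a second
lan = download_seconds(120, 1000)

print(f"WAN: {wan:.1f}s, LAN: {lan:.2f}s")
```

The point of the sketch is that for a 120MB file the 80 ms of latency is noise; it is the bandwidth term that makes the WAN download painful.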
It is the addition of this WAN layer into the architecture that concerns users of enterprise systems.
The Reality of the Cloud Model
Take the same business requirement: the user needs to work on her 120MB PowerPoint presentation which is on a server in the cloud. But now add something else that the wonderful world of virtualization gives us - virtualized desktops.
With a virtualized desktop the user uses a client application to connect to an instance of a desktop that is actually running on a server in the data center. The user sees that desktop and can interact with it from her own machine, but the OS and applications are actually running on the machine in the data center. Picture, for example, a user on an iPad whose virtual (remote) desktop is running Windows.
All the user gets on her local machine is a "screen painting" of what is happening on the virtual desktop. So when she opens that 120MB PowerPoint file, it is actually transferred to the virtual desktop (which is running on a server in the data center) and opened there. The user sees the PowerPoint file open in seconds and can edit it, email it...she can do everything she would normally do, without the file ever moving across the ether to her machine.
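A rough back-of-the-envelope comparison of the two models makes the difference concrete. In the download model the whole file must cross the WAN before PowerPoint can even open it; in the virtual-desktop model the file only crosses the data-center LAN, and the WAN carries just the screen updates. A minimal sketch, using the same illustrative link speeds as before (all figures are assumptions, including a nominal 3-second application open time):

```python
def time_until_editing_download(size_mb, wan_mbps=20, app_open_s=3):
    """Download model: the whole file crosses the assumed 20 Mbps WAN,
    then the local PowerPoint opens it."""
    return (size_mb * 8) / wan_mbps + app_open_s

def time_until_editing_vdi(size_mb, lan_mbps=1000, app_open_s=3):
    """Virtual-desktop model: the file only crosses the assumed 1 Gbps
    data-center LAN; the WAN carries screen updates, not the file."""
    return (size_mb * 8) / lan_mbps + app_open_s

download = time_until_editing_download(120)  # 51 seconds
vdi = time_until_editing_vdi(120)            # under 4 seconds
```

Under these assumed numbers the virtual-desktop user is editing in a few seconds while the download user is still waiting, which is exactly the "opens in seconds" experience described above.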
So, why don’t we just virtualize all desktops? In the world of “choice computing” this might be a reality sooner than you think, but don't get me wrong: there’s a litany of pros and cons to this approach. Beyond performance there are other positive implications for security, desktop management, virus control, etc. However, it might not work for everyone. I'm sitting on a United flight to Australia right now with no WiFi, so I can't access a virtual desktop at all. And if you need very high fidelity, or need to interact with files on your local machine, then today’s virtual desktops might not work for you.
The choice between virtual desktops, local thick clients and conventional thin clients is going to be a balancing act for a while, but for sure the virtualization of the client can make the cloud deployment model a reality for applications where it just may not have made sense in the past.