…by spending more money on it.
I know it seems both counterintuitive and self-serving for me to suggest that the strategy for reducing your mainframe costs is to spend more, but consider the following math.
- Assume that you spend $1M per year to run your mainframe. This includes the hardware, software, data center and personnel costs, power, etc.
- Now let’s assume that for that $1M you get to process 100 million “units of work” (UoW) per year.
- That gives you a cost of 1 cent per UoW.
Let’s then assume that you double down on your mainframe strategy by enabling a couple of dozen IFLs and throwing a few hundred Linux guests on the machine, and that these guests process 20 million UoW per year.
- Many of your fixed costs will not increase one iota – the space utilization, power, cooling and many day-to-day management costs will not increase. The MIPS-related software costs will not increase (IFLs don’t increase a machine’s MIPS rating). You’ll have some incremental costs directly related to hosting the Linux guests but they will be fairly minor.
- Now your workload has increased 20% to 120 million UoW.
- Your costs will have increased by only a few percentage points.
Your cost per UoW just dropped to significantly less than 1 cent.
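The arithmetic above is easy to sketch as a back-of-the-envelope calculation. The 5% incremental cost used here is an illustrative assumption (the text only says "a few percentage points"), not a measured figure:

```python
# Figures from the worked example above (all assumed, for illustration).
BASE_COST = 1_000_000       # annual mainframe cost in dollars
BASE_UOW = 100_000_000      # units of work processed per year

cost_per_uow = BASE_COST / BASE_UOW          # 1 cent per UoW

# Double down: IFL-hosted Linux guests add 20M UoW per year while
# fixed costs barely move. Assume 5% incremental cost (illustrative).
new_cost = BASE_COST * 1.05
new_uow = BASE_UOW + 20_000_000

new_cost_per_uow = new_cost / new_uow        # roughly 0.875 cents per UoW

print(f"Before: {cost_per_uow * 100:.3f} cents/UoW")
print(f"After:  {new_cost_per_uow * 100:.3f} cents/UoW")
```

Even with a generous allowance for the incremental cost of hosting the guests, the per-unit cost falls well below 1 cent.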
Why is this news and why should IT care?
This might seem like just a fictitious mathematical argument, but it’s important because it relates to scale. If there’s one thing that the private cloud has brought us, it is scale issues. If you want to double the capacity of a distributed environment, you can assume that your costs will just about double along with it. On a mainframe, the fixed cost portion of the system may look high, but the variable cost as you scale is typically very minor.
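One way to see the scale point is with a toy cost model. All dollar figures here are illustrative assumptions, not vendor numbers: the distributed environment's cost grows roughly linearly with capacity, while the mainframe carries a large fixed cost plus a small variable cost per unit of capacity.

```python
# Toy cost model (all figures are illustrative assumptions).
def distributed_cost(capacity_units: int) -> int:
    """Distributed environment: cost scales ~linearly with capacity."""
    return 10_000 * capacity_units            # assumed $10k per capacity unit

def mainframe_cost(capacity_units: int) -> int:
    """Mainframe: large fixed cost, small variable cost per unit."""
    return 800_000 + 2_000 * capacity_units   # assumed fixed + variable split

for units in (50, 100, 200):
    print(f"{units:>4} units: distributed ${distributed_cost(units):,} "
          f"vs mainframe ${mainframe_cost(units):,}")
```

At small scale the mainframe's fixed cost dominates and it looks expensive; as capacity grows, the curves cross and the mainframe's shallow variable-cost slope wins.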
We are seeing very aggressive growth curves for private-cloud systems in customers’ datacenters. If companies wake up to the economics of the mainframe as a platform they are going to have a potentially significant competitive advantage.