In this entry, I am going to explain some of the hidden secrets related to processors in a mainframe in terms that a non-mainframe person could understand. I’m pretty sure that they are not supposed to be secrets, but I can tell you from personal experience that they seem to be fairly well hidden.
The idea for this post came about when I went searching for a picture of an Integrated Facility for Linux (IFL) but could not find one. I was expecting to find a picture of something like a blade. Let's face it: a single IFL can support hundreds of Linux guests, so it must have a significant amount of hardware associated with it… you'd think. When people talk about IFLs, they make them sound like something that, if you dropped it on your foot, would have you exhaling an expletive or two; yet I could not find a picture of someone adding one to a mainframe.
So, here's the dirty little secret that I'm convinced IBM doesn't want you to know…
“There’s no such thing as an IFL.”
When I say "there's no such thing", I don't mean it in the same way as the tooth fairy, dilithium crystals, or unicorns. IFLs do exist, but they are not something that you have to ship, unpack, install, or drop on your foot.
So what are they?
Before we can understand what we are actually describing when we talk about an "IFL", we need to understand the relationship between the modules, chips, cores, and specialized hardware that perform the actual code execution on a mainframe. We will use the zEC12 as an example; if you are thinking about the lower-end zBC12, just know that it uses the same zEC12 chips mounted individually rather than on an MCM, and that its clock speed is set slightly lower.
So, onwards with the mainframe equivalent of Dem Bones.
The EC12 mainframe chassis can contain up to four multi-chip modules (MCMs) (see image). The MCM looks and feels like an unglazed clay floor tile with chips glued on it. The six square chips are the actual zEC12 chips, and the two rectangular chips are level 4 caches that the six chips can use to communicate with each other.
Each of the six zEC12 chips on the MCM contains 6 cores, each able to be enabled or disabled independently. In theory, you could enable just one of these cores. You may have guessed that embedded within each chip are the level 1, 2, and 3 caches that the cores use themselves and to communicate with each other.
Each of these cores contains six execution units (two integer units, two load-store units, one binary floating point unit, and one decimal floating point unit). Note that each zEC12 core can decode three instructions and execute seven operations in a single clock cycle (ref).
Here’s my attempt at mapping this out as a set of relationships…
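Since I can't show you a diagram here, a tiny Python sketch will have to do. The structure and numbers below are simply the zEC12 figures from the paragraphs above; the field names are my own labels, not IBM terminology.

```python
# Rough sketch of the zEC12 packaging hierarchy described above.
# The dictionary keys are my own labels, not official IBM terms.

EXECUTION_UNITS_PER_CORE = {
    "integer": 2,
    "load_store": 2,
    "binary_floating_point": 1,
    "decimal_floating_point": 1,
}

ZEC12 = {
    "mcms_per_chassis": 4,        # multi-chip modules
    "chips_per_mcm": 6,           # plus two L4 cache chips for chip-to-chip communication
    "cores_per_chip": 6,          # each core can be enabled or disabled independently
    "execution_units_per_core": sum(EXECUTION_UNITS_PER_CORE.values()),  # 6
    "operations_per_core_per_cycle": 7,  # decodes 3 instructions, executes 7 operations
    "clock_ghz": 5.5,
}
```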
If you do the simple math for the maximum device loading, you’ll come up with something like this:
- 4 multi-chip modules
- 24 zEC12 chips
- 144 cores
- 864 execution units
- 1,008 executed operations per clock cycle – note that the clock speed in the zEC12 is 5.5 GHz (4.2 GHz in the zBC12).
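If you'd rather have Python do the multiplication, a quick back-of-the-envelope check (using the sketch above, nothing official) gives the same totals:

```python
# Back-of-the-envelope totals for a fully populated zEC12.
mcms = 4
chips = mcms * 6            # 24 zEC12 chips
cores = chips * 6           # 144 cores
exec_units = cores * 6      # 864 execution units
ops_per_cycle = cores * 7   # 1,008 operations per clock cycle

print(chips, cores, exec_units, ops_per_cycle)  # 24 144 864 1008
```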
In reality, it's not quite this generous: the system reserves specific cores for its own operations, and it does not make all of the remaining cores available to users. In fact, only 120 of the 144 cores are enabled on a fully loaded zEC12, and you "only" get to use up to 101 of them. If that's not enough, you can cluster mainframes together into something called a Parallel Sysplex, but we will leave that for another day.
OK, so what is an IFL then?
When your mainframe ships from the factory, it might arrive with 2 general purpose (GP) processors, 2 IFLs, and a zIIP specialty processor "installed". What this actually means is that 5 cores are enabled on an MCM – 2 of them are designated for use as GP processors, 2 are designated to act as IFLs, and one will perform zIIP workloads. There's nothing physically different about any of the 5 cores; it's just what the system will allow to execute on them that differentiates one from another.
When you ask for a new IFL to be "installed", you don't have to wait for a parcel to arrive or for a technician to bring in a toolkit. IBM can dial in to the mainframe and simply "tell" one of the available cores to act as an IFL. In fact, in many cases customers can even do this themselves and just forward a check to IBM to cover the costs.
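Here's a toy model of that idea in Python – purely illustrative, not how IBM's firmware or licensing actually works – showing that "installing" an IFL amounts to flipping a designation on a core that was physically there all along:

```python
# Toy model of processor characterization: every core is identical hardware,
# and only its designation (GP, IFL, zIIP, or spare) differs.
# Purely illustrative; not IBM's actual mechanism.

cores = ["spare"] * 144  # a fully populated zEC12 has 144 physical cores


def characterize(designation, count=1):
    """Designate `count` spare cores for a given workload type."""
    changed = 0
    for i, role in enumerate(cores):
        if changed == count:
            break
        if role == "spare":
            cores[i] = designation
            changed += 1
    return changed


# The factory configuration from the example above: 2 GPs, 2 IFLs, 1 zIIP.
characterize("GP", 2)
characterize("IFL", 2)
characterize("zIIP", 1)

# "Installing" another IFL later is just another designation change:
# no parcel, no toolkit, no extra floor space.
characterize("IFL", 1)

print(cores.count("IFL"))  # 3
```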
So IFLs are not something that you slot into a machine; they are already there waiting to be used… this is how we can make the crazy claim that you can add capacity for 500 Linux guests to your datacenter without adding any requirements for floor space, power, or cooling. There's also no extra network equipment – cables, routers, firewalls, etc. – and a minimal increase (if any) in system management. Makes you wonder why you are not doing it, doesn't it?
Cores and machine names
Now that you know what a core is in System z terms, the machine names might make a little more sense: an H06 has 6 cores, an H13 has 13 cores, and you can work out the rest for yourself.
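Or let Python do the working out for you – assuming, purely for illustration, a model name where the digits after the "H" give the core count, as they do for the H06 and H13:

```python
def configurable_cores(model_name):
    """Toy parser: in names like 'H06' or 'H13', the digits give the core count."""
    return int(model_name[1:])

print(configurable_cores("H06"))  # 6
print(configurable_cores("H13"))  # 13
```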