As you may already know, vSphere 4.1 was released last week with much fanfare from VMware, and it's definitely a worthwhile upgrade, which comes for free (as usual) if you have a support contract.
This new vSphere release, a major milestone (much as ESX 3.5 was for the 3.0 line), comes with a truckload of new features that other bloggers have already covered in depth; here is a list of the most interesting posts:
Frank Denneman: Load Based Teaming, DPM scheduled tasks and VM to Hosts affinity rule
Duncan Epping: Cluster Operational Status
Chad Sakac: vStorage APIs for Array Integration
But I would like to spend some time on the new CPU scheduler, which in my opinion is a great improvement. Let's focus on some of the changes:
– Further Relaxed Co-Scheduling
Co-scheduling enforcement is now a per-vCPU operation: the VM is no longer stopped as a whole when the accumulated vCPU skew crosses the threshold. Instead, only the individual vCPU that has run too far ahead is co-stopped, until its lagging siblings catch up.
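To make the idea concrete, here is a tiny Python sketch of per-vCPU skew accounting. The threshold, the "tick" granularity and the data structures are made-up illustrative values, not VMware's actual implementation; the point is only that the co-stop decision is taken per vCPU rather than for the whole VM.

```python
# Illustrative sketch of relaxed co-scheduling: skew is tracked per vCPU,
# and only a vCPU leading its slowest sibling by more than a (hypothetical)
# threshold is co-stopped. This is NOT VMware code, just the accounting idea.

COSTOP_THRESHOLD = 5  # hypothetical skew threshold, in "ticks"

class VCpu:
    def __init__(self, vid):
        self.vid = vid
        self.progress = 0      # ticks of guest work completed
        self.costopped = False

class Vm:
    def __init__(self, n_vcpus):
        self.vcpus = [VCpu(i) for i in range(n_vcpus)]

    def run_tick(self, runnable_ids):
        """Advance the vCPUs that got pCPU time this tick, then
        co-stop only the individual vCPUs that lead too much."""
        for v in self.vcpus:
            if v.vid in runnable_ids and not v.costopped:
                v.progress += 1
        slowest = min(v.progress for v in self.vcpus)
        for v in self.vcpus:
            skew = v.progress - slowest
            # Per-vCPU decision: stop only the leaders, not the whole VM.
            v.costopped = skew > COSTOP_THRESHOLD

vm = Vm(4)
# vCPUs 0-2 keep getting pCPU time while vCPU 3 is starved.
for _ in range(8):
    vm.run_tick({0, 1, 2})
print([(v.vid, v.progress, v.costopped) for v in vm.vcpus])
# → [(0, 6, True), (1, 6, True), (2, 6, True), (3, 0, False)]
```

Note that the three leading vCPUs stop individually once their skew exceeds the threshold, while vCPU 3 remains runnable and free to catch up; before 4.1, the whole VM would have been co-stopped together.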
– Elimination of CPU Scheduler Cell
The cell mechanism worked well with 2- and 4-way vSMP on dual- and quad-core CPUs, but it was becoming a limiting factor in the 8- and 12-core era. Now a VM can be scheduled on any pCPU available on the system (not just within a single cell/socket), thus utilizing all the processor cache and memory bandwidth available.
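The difference is easy to picture with a tiny Python sketch. The pCPU count and cell size below are illustrative placeholders, not the real topology of any host:

```python
# Sketch contrasting the old scheduler-cell placement with 4.1's
# cell-less placement. Topology values are illustrative only.

N_PCPUS = 12          # e.g. two hypothetical 6-core sockets
CELL_SIZE = 4         # pre-4.1 cells were sized around a few pCPUs

def eligible_pcpus_with_cells(vm_cell):
    """Pre-4.1: a VM's vCPUs were confined to one scheduler cell."""
    start = vm_cell * CELL_SIZE
    return set(range(start, start + CELL_SIZE))

def eligible_pcpus_cellless():
    """ESX 4.1: any vCPU may be placed on any pCPU in the system."""
    return set(range(N_PCPUS))

print(sorted(eligible_pcpus_with_cells(1)))  # → [4, 5, 6, 7]
print(sorted(eligible_pcpus_cellless()))     # → [0, 1, ..., 11]
```

With the cell gone, the scheduler's placement choices grow from one cell's worth of pCPUs to the whole machine, which is exactly what lets a VM spread across all the cache and memory bandwidth on the box.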
– Wide-VM NUMA Support
This is an enhancement aimed at improving performance on large systems that host big vSMP VMs. A wide-VM is a VM with more vCPUs than there are cores on a NUMA node, for example a 4-way vSMP VM on a dual-core AMD Opteron. With ESX 4.1 such VMs can take advantage of NUMA management.
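The basic mechanism is that a wide VM is split into smaller NUMA clients, each of which fits on a node and can be placed and memory-localized like a normal NUMA-managed VM. Here is a minimal Python sketch of that splitting step; the function name and sizes are my own illustration, not VMware's internals:

```python
# Sketch of splitting a wide VM into smaller NUMA clients, each small
# enough to fit on one NUMA node. Names and sizes are illustrative only.

def split_into_numa_clients(n_vcpus, cores_per_node):
    """Return lists of vCPU ids, one list per hypothetical NUMA client."""
    clients = []
    for start in range(0, n_vcpus, cores_per_node):
        clients.append(list(range(start, min(start + cores_per_node, n_vcpus))))
    return clients

# A 4-way vSMP VM on dual-core Opteron nodes -> two 2-vCPU clients,
# each of which the NUMA scheduler can place and keep memory-local.
print(split_into_numa_clients(4, 2))  # → [[0, 1], [2, 3]]
```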
You can also find a very interesting paper directly from VMware which explains all the features described above in great detail, and shows some benchmarks too.
Originally posted at: http://p2v.it/2010/07/22/vsphere-4-1-and-its-new-cpu-scheduler/