THE PUBLIC SHAMING OF RESOURCE POOL-AS-A-FOLDER USER
Yesterday there was some public shaming of Antony Spiteri. He was outed for using vSphere resource pools as folders. https://twitter.com/davidhill_co/status/988797652346245126 A funny thread, and he truly deserved all the public shaming by the community members ;).

All fun aside, using resource pools as folders is not recommended by VMware. As I described in the new vSphere 6.5 DRS white paper available at vSphere Central:

Correct use: Resource pools are an excellent construct to isolate a particular amount of resources for a group of virtual machines without having to micro-manage resource settings for each individual virtual machine. A reservation set at the resource pool level guarantees each virtual machine inside the resource pool access to these resources. Depending on their activity, these virtual machines can operate without any contention.

Incorrect use: Resource pools should not be used as a form of folders within the inventory view of the cluster. Resource pools consume resources from the cluster and distribute these amongst their child objects; these can be additional resource pools and virtual machines. Due to this isolation of resources, using resource pools as folders in a heavily utilized vSphere cluster can lead to an unintended level of performance degradation for some virtual machines inside or outside the resource pool. Understanding this behavior allows you to design a correct resource pool structure.

Currently, I'm working on a new vSphere DRS Resource Pool white paper that sheds new light on the distribution of resources under normal conditions and under load (the Resource Pool Pie Paradox). I will keep you posted!
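To make the dilution effect a bit more tangible, here is a minimal Python sketch. It is an illustration only, not a DRS formula; the VM layout is made up and it assumes the default "Normal" share values (4000 CPU shares for a resource pool, 1000 CPU shares per vCPU for a virtual machine).

    # Illustration only: how a resource pool used as a folder dilutes its VMs under contention.
    def slice_of_parent(own_shares, sibling_shares):
        """Fraction of the parent's contended resources this sibling receives."""
        return own_shares / (own_shares + sibling_shares)

    rp_shares = 4000                  # "folder" resource pool, default Normal shares (assumed)
    vm_shares = 2 * 1000              # 2-vCPU VM, default Normal shares (assumed)
    vms_in_pool, vms_at_root = 10, 5  # hypothetical layout

    pool_slice = slice_of_parent(rp_shares, vms_at_root * vm_shares)
    per_vm_in_pool = pool_slice / vms_in_pool
    per_vm_at_root = slice_of_parent(vm_shares, rp_shares + (vms_at_root - 1) * vm_shares)

    print(f"per VM inside the pool:  {per_vm_in_pool:.1%}")   # ~2.9%
    print(f"per VM outside the pool: {per_vm_at_root:.1%}")   # ~14.3%

Under contention, the ten VMs placed inside the "folder" each end up with a much smaller slice than the five VMs sitting directly under the cluster root, which is exactly the unintended performance degradation mentioned above.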
PUBLIC SPEAKING SCHEDULE
The VMUG season has started, and I have a few speaking sessions at various events. I thought it might be convenient to list the events and topics:

Date: February 22
Organization: North East UK VMUG
Location: Newcastle
Topic: VMware Cloud on AWS from a resource management perspective

Date: March 7
Organization: Swiss-French VMUG
Location: Lausanne, Switzerland
Topic: VMware Cloud on AWS from a resource management perspective

Date: March 8
Organization: Swiss-German VMUG
Location: Zurich, Switzerland
Topic: VMware Cloud on AWS from a resource management perspective

Date: March 20
Organization: Dutch VMUG
Location: Den Bosch, Netherlands
Topic: vSphere Resource Kit Double-Hour
Session 1: vSphere 6.5 Host Resource Deep Dive with Niels Hagoort
Session 2: vSphere 6.5 Clustering Deep Dive with Duncan Epping

Date: March 29
Organization: Virtual VMUG
Location: Online
Topic: VMware Cloud on AWS from a resource management perspective

Date: April 10
Organization: Turkey VMUG
Location: Istanbul, Turkey
Topic: VMware Cloud on AWS from a resource management perspective

Date: May 24
Organization: Czech Republic VMUG
Location: Prague
Topic: vSphere 6.5 Host Resource Deep Dive with Niels Hagoort

Hope to see you there!
VIRTUALLY SPEAKING PODCAST #67 RESOURCE MANAGEMENT
Two weeks ago Pete Flecha (a.k.a. Pedro Arrow) and John Nicholson invited me to their always awesome podcast to talk about resource management. During our conversation, we covered both on-premises vSphere and the features of VMware Cloud on AWS that help cater to the needs of your workloads. Being a guest on this podcast is an honour, and time flies talking to these two guys. I hope you enjoy it as much as I did.
VSPHERE 6.5 DRS AND MEMORY BALANCING IN NON-OVERCOMMITTED CLUSTERS
DRS is over a decade old and is still going strong. DRS is aligned with the premise of virtualization: resource sharing and overcommitment of resources. The goal of DRS is to provide compute resources to the active workload to improve workload consolidation on a minimal compute footprint. However, virtualization has surpassed the original principle of workload consolidation to provide unprecedented workload mobility and availability. With this change of focus, many customers do not overcommit on memory. A lot of customers design their clusters to contain (just) enough memory capacity to ensure all running virtual machines have their memory backed by physical memory. In this scenario, DRS behavior should be adjusted, as it traditionally focuses on active memory use. vSphere 6.5 provides this option in the DRS cluster settings. By ticking the box "Memory Metric for Load Balancing", DRS uses the virtual machine's consumed memory for load-balancing operations. Please note that DRS focuses on consumed memory, not configured memory! DRS always keeps a close eye on what is happening rather than accepting a static configuration. Let's take a closer look at the DRS input metrics of active and consumed memory.

Out-of-the-box DRS Behavior
During a load-balancing operation, DRS calculates the active memory demand of the virtual machines in the cluster. The active memory represents the working set of the virtual machine, which signifies the number of actively used pages in RAM. By using the working-set estimation, the memory scheduler determines which of the allocated memory pages are actively used by the virtual machine and which allocated pages are idle. To accommodate a sudden rapid increase of the working set, 25% of the idle consumed memory is included. Memory demand also includes the virtual machine's memory overhead.

Let's use a 16 GB virtual machine as an example of how DRS calculates the memory demand. The guest OS running in this virtual machine has touched 75% of its memory size since it was booted, but only 35% of its memory size is active. This means that the virtual machine has consumed 12288 MB, and 5734 MB of this is used as active memory. As mentioned, DRS accommodates a percentage of the idle consumed memory to be ready for a sudden increase in memory use. To calculate the idle consumed memory, the active memory of 5734 MB is subtracted from the consumed memory of 12288 MB, resulting in a total of 6554 MB of idle consumed memory. By default, DRS includes 25% of the idle consumed memory, i.e. 6554 * 25% = roughly 1639 MB. The virtual machine has a memory overhead of 90 MB. The memory demand DRS uses in its load-balancing calculation is as follows: 5734 MB + 1639 MB + 90 MB = 7463 MB. As a result, DRS selects a host that has 7463 MB available for this machine if it needs to move this virtual machine to improve the load balance of the cluster.

Memory Metric for Load Balancing Enabled
When enabling the option "Memory Metric for Load Balancing", DRS takes the consumed memory plus the memory overhead into account for load-balancing operations. In essence, DRS uses the metric Active + 100% IdleConsumedMemory. The vSphere 6.5 Update 1d UI allows you to get better visibility into the memory usage of the virtual machines in the cluster. The memory utilization view can be toggled between active memory and consumed memory. Recently, Adam Eckerle published a great article that outlines all the improvements of vSphere 6.5 Update 1d. Go check it out. Animated GIF courtesy of Adam.
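To replay the arithmetic of the out-of-the-box 16 GB example above, here is a minimal Python sketch of the demand calculation as described in this post (an illustration only, not DRS source code):

    # Memory demand as described above: active + 25% of idle consumed + overhead.
    def drs_memory_demand_mb(active_mb, consumed_mb, overhead_mb, idle_share=0.25):
        idle_consumed_mb = consumed_mb - active_mb
        return active_mb + idle_share * idle_consumed_mb + overhead_mb

    configured_mb = 16 * 1024            # 16 GB virtual machine
    consumed_mb = 0.75 * configured_mb   # 75% touched since boot = 12288 MB
    active_mb = 0.35 * configured_mb     # 35% active = ~5734 MB
    overhead_mb = 90                     # VM memory overhead from the example

    print(round(drs_memory_demand_mb(active_mb, consumed_mb, overhead_mb)))  # ~7463 MB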
Reviewing the cluster shows that it is pretty much balanced. The default view, the sum of virtual machine memory utilization (active memory), shows that host ESXi02 is busier than the others. However, since the active memory utilization of each host is less than 20% and each virtual machine is receiving the memory it is entitled to, DRS will not move virtual machines around. Remember, DRS is designed to create as little overhead as possible. Moving one virtual machine to another host just to make the active usage more balanced is a waste of compute cycles and network bandwidth. The virtual machines receive what they want to receive now, so why take the risk of moving VMs?

A different picture of the current situation appears when you toggle the graph to consumed memory. Now we see a bigger difference in consumed memory utilization: much more than 20% between ESXi02 and the other two hosts. By default, DRS in vSphere 6.5 tries to clear a utilization difference of 20% between hosts. This is called Pair-Wise Balancing. However, since DRS is focused on active memory usage, Pair-Wise Balancing won't be activated with regard to the 20% difference in consumed memory utilization. After enabling the option "Memory Metric for Load Balancing", DRS rebalances the cluster with the optimal number of migrations (as few as possible) to reduce overhead and risk.

Active versus Consumed Memory Bias
If you design your cluster with no memory overcommitment as the guiding principle, I recommend testing the vSphere 6.5 DRS option "Memory Metric for Load Balancing". You might want to switch DRS to manual mode first to verify the recommendations.
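A simplified sketch of the pair-wise check described above; the per-host consumed memory utilization figures are hypothetical, and the real DRS algorithm weighs far more factors than this:

    # Flag host pairs whose utilization gap exceeds the 20% pair-wise default.
    from itertools import combinations

    consumed_util = {"ESXi01": 0.45, "ESXi02": 0.78, "ESXi03": 0.48}  # fraction of host memory (hypothetical)
    threshold = 0.20

    for (h1, u1), (h2, u2) in combinations(consumed_util.items(), 2):
        gap = abs(u1 - u2)
        if gap > threshold:
            print(f"{h1} vs {h2}: {gap:.0%} gap exceeds the {threshold:.0%} target")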
EXPLAINER ON #SPECTRE & #MELTDOWN BY GRAHAM SUTHERLAND
Sometimes you stumble across a brilliant Twitter thread, so good that it should never be lost. Graham Sutherland (@gsuberland) helped the world understand the Spectre and Meltdown bugs. I'm publishing his tweet thread in text form, as this is simply the best explanation of the bugs I've seen. Please note that VMware has released its response for Bounds-Check Bypass (CVE-2017-5753), Branch Target Injection (CVE-2017-5715) & Rogue Data Cache Load (CVE-2017-5754), a.k.a. Spectre & Meltdown.
FREE VSPHERE 6.5 HOST RESOURCES DEEP DIVE E-BOOK
In June of this year, Niels and I published the vSphere 6.5 Host Resources Deep Dive, and the community was buzzing. Twitter exploded, and many community members provided rave reviews. This excitement caught Rubrik's attention, and they decided to support the community by giving away 2000 free copies of the printed version at VMworld. The interest was overwhelming; before the end of the second signing session in Barcelona, we ran out of books. A lot of people reached out to Rubrik and us to find out if they could get a free book as well. This gave us an idea, and we sat down with Rubrik and the VMUG organization to determine how to cater to the community. We are proud to announce that you can download the e-book version (PDF only) for free at rubrik.com. Just sign up and download your full e-book copy here. Spread the word! And if you like, thank @Rubrik and @myVMUG for their efforts to help the VMware community advance. https://www.youtube.com/watch?v=a4spq5B4wtg
WHAT IF THE VM MEMORY CONFIG EXCEEDS THE MEMORY CAPACITY OF THE PHYSICAL NUMA NODE?
This week I had the pleasure of talking to a customer about NUMA use cases, and a very interesting configuration came up. They have a VM with a memory configuration that exceeds the memory capacity of a single NUMA node of the ESXi host. This scenario is covered in the vSphere 6.5 Host Resources Deep Dive; an excerpt is below.

Memory Configuration
The scenario described happens in multi-socket systems that are used to host monster VMs. VMs with extreme memory footprints are getting more common by the day. The system is equipped with two CPU packages. Each CPU package contains twelve cores. The system has a memory configuration of 128 GB in total. The NUMA nodes are symmetrically configured and contain 64 GB of memory each. However, if the VM requires 96 GB of memory, a maximum of 64 GB can be obtained from a single NUMA node. This means that 32 GB of memory could become remote if the vCPUs of that VM fit inside one NUMA node. In this case, the VM is configured with 8 vCPUs. From a vCPU perspective, the VM fits inside one NUMA node, and therefore the NUMA scheduler configures a single virtual proximity domain (VPD) and a single load-balancing group, internally referred to as a physical proximity domain (PPD), for this VM.
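The sizing check itself is straightforward. A minimal Python sketch using the figures from the excerpt (illustrative only, not the actual NUMA scheduler logic):

    # Does the VM fit a single NUMA node, and how much memory could end up remote?
    cores_per_node, memory_per_node_gb = 12, 64    # per NUMA node in this system
    vm_vcpus, vm_memory_gb = 8, 96                 # the VM from the example

    if vm_vcpus <= cores_per_node:
        # Single NUMA client (one VPD/PPD); memory beyond the node capacity can become remote.
        remote_gb = max(0, vm_memory_gb - memory_per_node_gb)
        print(f"Single NUMA client; up to {remote_gb} GB may be placed remotely")
    else:
        print("The vCPU count spans NUMA nodes; the VM is split into multiple NUMA clients")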
A VSPHERE FOCUSED GUIDE TO THE INTEL XEON SCALABLE FAMILY - MEMORY SUBSYSTEM
The Intel Xeon Scalable Family introduces a new platform (Purley). The most prominent change regarding system design is the memory subsystem.

More Memory Bandwidth and Consistency in Speed
The new memory subsystem supports the same number of DIMMs per CPU as the previous models; however, it's wider and less deep. What I mean by that is that the last platform (Grantley) supported up to three DIMMs per channel (DPC) and made use of four channels. In total, the Grantley platform supported up to twelve DIMMs per CPU. Purley increases the number of channels from four to six but reduces the number of supported DIMMs per channel from three to two. Although this sounds like a potato, potahto; tomato, tomahto discussion, it provides a significant increase in bandwidth while ensuring consistency in speed during a scale-up exercise. Let's take a closer look.

DIMMs per Memory Channel
Depending on the DIMM slot configuration of the server board, multiple DIMMs are supported per channel. The E5-2600 v-series supports up to 3 DIMMs per channel (3 DPC). Using more DIMMs per channel provides the largest capacity, but unfortunately, it impacts the operational speed of the memory. A DIMM groups memory chips into ranks. DIMMs come in three rank configurations: single-rank, dual-rank, or quad-rank, denoted as (xR). With the addition of each rank, the electrical load on the channel increases, and as more ranks are used in a memory channel, memory speed drops, restricting the use of additional memory. Therefore, in certain configurations, DIMMs will run slower than their listed maximum speeds. This reduction in speed occurs when 3 DIMMs per channel are used.
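A back-of-the-envelope comparison of the two layouts in Python (a sketch under stated assumptions: DDR4-2400 as the top supported speed on the E5-2600 v4 generation and DDR4-2666 on Purley, with the usual theoretical peak of 8 bytes per transfer per channel; actual throughput depends on DIMM population and DPC):

    # Theoretical peak memory bandwidth per socket: channels x MT/s x 8 bytes per transfer.
    def peak_bandwidth_gb_s(channels, mt_per_s, bytes_per_transfer=8):
        return channels * mt_per_s * bytes_per_transfer / 1000

    platforms = {
        "Grantley (E5-2600 v4)": {"channels": 4, "dpc": 3, "mt_s": 2400},
        "Purley (Xeon Scalable)": {"channels": 6, "dpc": 2, "mt_s": 2666},
    }
    for name, p in platforms.items():
        dimms = p["channels"] * p["dpc"]
        print(f"{name}: {dimms} DIMM slots per CPU, ~{peak_bandwidth_gb_s(p['channels'], p['mt_s']):.0f} GB/s peak")

Both platforms top out at twelve DIMMs per CPU, but Purley spreads them over more channels at a higher transfer rate, which is where the bandwidth gain comes from.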
A VSPHERE FOCUSED GUIDE TO THE INTEL XEON SCALABLE FAMILY
Intel released the much-anticipated Skylake server CPU this year. Moving away from the E5-2600 v-series moniker, Intel names the new iteration of its server CPU the Intel Xeon Scalable Family. On top of this, it uses precious-metal categories such as Platinum and Gold to identify different types and abilities. Upholding the tradition, the new Xeon family contains more cores than the previous Xeon generation. The new top-of-the-line CPU offers 28 cores on a single processor die, and memory speeds of up to 2666 MHz are now supported. However, the biggest appeal for vSphere datacenters is the new "Purley" platform and its focus on increasing bandwidth between virtually every component. In this series, we are going to look at the new Intel Xeon Scalable Family microarchitecture and which functions help to advance vSphere datacenters.
VMWARE CLOUD ON AWS TECHNICAL OVERVIEW
Please note that this information can become outdated due to the ongoing changes to this cloud service. Please consult https://cloud.vmware.com/vmc-aws/roadmap for recent information about the latest release.

Yesterday we launched the VMware Cloud on AWS service. VMware Cloud on AWS allows you to run your applications across private, public, and hybrid cloud environments based on VMware vSphere, with optimized access to AWS services. The Cloud SDDC consists of vSphere, NSX, and vSAN technology to provide you with a familiar environment that can be managed and operated with your current tools and skill set. By leveraging bare-metal AWS infrastructure, the Cloud SDDC can scale in an unprecedented way.