VIRTUALLY SPEAKING PODCAST ABOUT TECHNICAL WRITING

Last week Duncan and I were guests on the ever-popular Virtually Speaking Podcast. In this show we discussed the differences between forms of technical writing, i.e., writing a blog post versus writing a book. We spoke a lot about the challenges of writing a book and the importance of a supporting cast. We received a lot of great feedback on social media, and Pete told me the episode was downloaded more than 1,000 times in the first 24 hours. I think this is especially impressive as he published the podcast on a Saturday afternoon. Due to this popularity, I thought it might be cool to share the episode in case you missed the announcement. Pete and John shared the links to our VMworld sessions on this page. During the show, I mentioned the VMworld session of Katarina Wagnerova and Mark Brookfield. If you go to VMworld, I would recommend attending this session. It’s always interesting to hear people talk about how they designed an environment and dealt with problems in a very isolated place on earth. Enjoy listening to the show.

RESOURCE CONSUMPTION OF ENCRYPTED VMOTION

vSphere 6.5 introduced encrypted vMotion; vMotion traffic is encrypted if both the source and destination host are capable of supporting encrypted vMotion. When it is used, vMotion traffic consumes more CPU cycles on both the source and destination host. This article zooms in on the CPU consumption impact of encrypted vMotion on the vSphere cluster and how DRS leverages this new(ish) technology.

CPU Consumption of the vMotion Process
ESXi reserves CPU resources on both the destination and the source host to ensure vMotion can consume the available bandwidth. ESXi only takes the number of vMotion NICs and their respective speeds into account; the number of vMotion operations does not affect the total CPU resources reserved! The reservation is 10% of a CPU core for a 1 GbE NIC and 100% of a CPU core for a 10 GbE NIC. vMotion is configured with a minimum reservation of 30%. Therefore, if you have a 1 GbE NIC configured for vMotion, it reserves at least 30% of a single core.
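The reservation arithmetic above can be sketched in a few lines. This is an illustrative model only, not an actual ESXi API; the function name and structure are my own, and the per-NIC-speed figures come straight from the text (10% of a core per 1 GbE NIC, 100% per 10 GbE NIC, 30% floor).

```python
# Fraction of a CPU core reserved per vMotion NIC, keyed by NIC speed in GbE.
PCT_PER_CORE = {1: 0.10, 10: 1.00}
# vMotion reserves at least 30% of a single core, regardless of NIC count.
MIN_RESERVATION = 0.30

def vmotion_cpu_reservation(nic_speeds_gbe):
    """Return the fraction of CPU cores reserved on a host for vMotion.

    Only the number of vMotion NICs and their speeds matter; the number
    of concurrent vMotion operations does not change the reservation.
    """
    total = sum(PCT_PER_CORE[speed] for speed in nic_speeds_gbe)
    return max(total, MIN_RESERVATION)

# A single 1 GbE vMotion NIC: 10% computed, but the 30% floor applies.
print(vmotion_cpu_reservation([1]))       # 0.3
# Two 10 GbE vMotion NICs: two full cores reserved per host.
print(vmotion_cpu_reservation([10, 10]))  # 2.0
```

Note that the same reservation is made on both the source and the destination host, so a host with dual 10 GbE vMotion NICs sets aside two full cores' worth of CPU for vMotion traffic.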

NEW FLING: DRS ENTITLEMENT

I’m proud to announce the latest fling: DRS Entitlement. This fling is built by the performance team, and it provides insight into the demand and entitlement of the virtual machines and resource pools within a vSphere cluster. By default, it shows the active CPU and memory consumption, which by itself helps to understand the dynamics within the cluster, especially when you are using resource pools with different share values. In this example, I have two resource pools: one containing the high-value workloads for the organization, and one containing virtual machines used for test and dev operations. The high-value workloads should receive the resources they require at all times.

The What-If functionality allows you to simulate a few different scenarios: a 100% demand option and a simulation of resource allocation settings. The screenshot below shows the what-if entitlement. What if these workloads generate 100% activity; what resources do these workloads require if they go to the max? This allows you to set the appropriate resource allocation settings, such as reservations and limits, on the resource pools or maybe even on particular virtual machines. Another option is to specify particular Reservation, Limits, and Shares (RLS) settings on an object. Select the RLS option and select the object you want to use in the simulation. In this example, I selected the Low Value Workload resource pool and changed its share value setting. You can verify the new setting before running the analysis. Please note that this is an analysis; it does not affect the resource allocation of active workloads whatsoever. You can simulate different settings and understand the outcome. Once the correct setting is determined, you can apply the setting on the object manually, or you can use the PowerCLI option and export the PowerCLI one-liner to programmatically change the RLS settings.
Follow the instructions on the fling’s website to install it on your vCenter. I would like to thank Sai Inabattini and Adarsh Jagadeeshwaran for creating this fling and for listening to my input! RUN DRS!
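To make the what-if idea above concrete, here is a minimal sketch of how share values divide contended cluster capacity between sibling resource pools. This is my own illustration of the principle, not the fling’s actual code; the share values used (8000 for High, 4000 for Normal, 2000 for Low) are the vSphere defaults for resource pool CPU shares.

```python
def entitlement_by_shares(cluster_capacity, pools):
    """Split capacity across sibling pools proportionally to share values.

    pools: dict mapping pool name -> share value.
    Returns dict mapping pool name -> entitled capacity under full contention.
    """
    total_shares = sum(pools.values())
    return {name: cluster_capacity * shares / total_shares
            for name, shares in pools.items()}

# A 100 GHz cluster; High Value pool at High shares, Low Value at Normal.
before = entitlement_by_shares(100, {"High Value": 8000, "Low Value": 4000})
# What-if: drop the Low Value pool to Low shares before touching production.
after = entitlement_by_shares(100, {"High Value": 8000, "Low Value": 2000})
print(before)  # High Value ~66.7 GHz, Low Value ~33.3 GHz
print(after)   # High Value 80.0 GHz, Low Value 20.0 GHz
```

Running the two scenarios side by side is essentially what the fling’s what-if analysis lets you do, without changing the resource allocation of any active workload.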

STRETCHED CLUSTERS ON VMWARE CLOUD ON AWS, A REALLY BIG THING

This week Emad published an excellent article about the stretched cluster functionality of VMware Cloud on AWS. To sum up, you can now deploy a single vSphere cluster across two AWS Availability Zones.

A Trip Down Memory Lane
I think the ability to stretch a vSphere cluster across two Availability Zones is a really big thing. Think back to the days when we had to refactor an application to make it highly available. To reduce application downtime, you typically used clustering software such as Microsoft Cluster Service or Veritas Cluster Server. But not all applications were a fit for this solution. When we introduced VMware High Availability back in 2006, we brought a big change to the industry. From that point on, you could provide crash-consistent failover ability to all your workloads. No need to refactor any application, no need to build outlandish hardware solutions. Just enable a few tickboxes at the infrastructure layer, and every workload running inside a VM is protected. And to this day, HA remains the most popular functionality of vSphere.

Amazon Web Services Resiliency Strategy
Amazon urges you to design your application to be resilient to infrastructure outages. Amazon AWS is hosted in multiple locations worldwide. These locations are composed of Regions and Availability Zones. Each Region is a separate geographic area that has multiple, isolated locations known as Availability Zones. AWS provides the ability to place instances and data in multiple locations, and you can take advantage of the safety and reliability of geographic redundancy by spanning your Auto Scaling group across multiple Availability Zones within a Region and then attaching a load balancer to distribute incoming traffic across those Availability Zones. Incoming traffic is distributed equally across all Availability Zones enabled for your load balancer. This works very well if you are refactoring your application or if you are building a completely new cloud-native stack. The challenge we face today is that not all applications lend themselves to refactoring, and some applications do not require the journey from monolithic to full FaaS.

Hybrid-Cloud Experience
With stretched clusters in VMware Cloud on AWS, we introduce the same ease of infrastructure resiliency to workloads that run on AWS infrastructure. Merely expand your vSphere cluster to six hosts and select a multi-AZ deployment. After that, the workloads in the Cloud SDDC are protected against AZ outages. If something happens, HA detects the failed VMs and restarts them on different physical servers in the remaining AZ without human involvement. The ability to stretch your vSphere cluster across AZs allows you to easily provide resiliency to your workloads within the AWS infrastructure without the Herculean effort of refactoring all your applications.

DYING HOME LAB - FEEDBACK WELCOME

The servers in my home lab are dying on a daily basis. After four years of active duty, I think they have the right to retire. So I need something else. But what? I can’t rent lab space as I work with unreleased ESXi code. I’ve been waiting for the Intel Xeon D 21xx Supermicro systems, but I have the feeling that Elon will reach Mars before we see these systems widely available. The system that I have in mind is the following:

DEDICATED HARDWARE IN A PUBLIC CLOUD WORLD

One of the more persistent misconceptions is that the components of VMware’s Software Defined Data Center (SDDC) on VMware Cloud on AWS are virtualized, or that the deployed VMs run natively on Amazon. And to be honest, it’s not even weird that most people think this way. After all, Amazon Web Services launched in March 2006, 12 years ago. AWS is synonymous with Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3). All of a sudden, you can now “run vSphere on AWS”.

VBROWNBAG TECHTALKS VMWORLD CALL FOR PAPERS NOW OPEN

Although the selection process for the submitted VMworld 2018 sessions is still ongoing, vBrownBag announced their call for papers. As Duncan mentioned in his call-for-papers article: ‘Good luck, and remember: if you don’t end up getting selected, submit the proposal to a VMUG near you instead. They are always begging for community sessions.’ Think about signing up for the vBrownBag TechTalks as well. Since last year, all the vBrownBag sessions have been published in the content catalog. Thus your session is visible to all 23,000+ attendees. Go right ahead and fill out this form.

THE PUBLIC SHAMING OF RESOURCE POOL-AS-A-FOLDER USER

Yesterday there was some public shaming of Antony Spiteri. He was outed for using vSphere resource pools as folders. https://twitter.com/davidhill_co/status/988797652346245126 A funny thread, and he truly deserved all the public shaming by the community members ;). All fun aside, using resource pools as folders is not recommended by VMware. As I described in the new vSphere 6.5 DRS white paper available at vSphere Central:

Correct use: Resource pools are an excellent construct to isolate a particular amount of resources for a group of virtual machines without having to micro-manage resource settings for each individual virtual machine. A reservation set at the resource pool level guarantees each virtual machine inside the resource pool access to these resources. Depending on the activity of these virtual machines, they can operate without any contention.

Incorrect use: Resource pools should not be used as a form of folders within the inventory view of the cluster. Resource pools consume resources from the cluster and distribute these amongst their child objects, which can be additional resource pools and virtual machines. Due to this isolation of resources, using resource pools as folders in a heavily utilized vSphere cluster can lead to an unintended level of performance degradation for some virtual machines inside or outside the resource pool.

Understanding this behavior allows you to design a correct resource pool structure. Currently, I’m working on a new vSphere DRS Resource Pool white paper which sheds some new light on the distribution of resources under normal conditions and under load (the Resource Pool Pie Paradox). I will keep you posted!
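A small sketch makes the folder problem tangible: sibling resource pools are entitled to cluster resources by their pool-level share value, regardless of how many VMs each pool contains, so a "folder" pool crowded with VMs dilutes the per-VM entitlement. This is my own simplified model under full contention with equal-share VMs inside each pool; the share value of 4000 is the vSphere default for Normal CPU shares on a resource pool.

```python
def per_vm_entitlement(cluster_capacity, pools):
    """Estimate per-VM entitlement under full contention.

    pools: dict mapping pool name -> (pool_shares, vm_count).
    Assumes all VMs within a pool have equal shares. Returns a dict
    mapping pool name -> capacity each of its VMs is entitled to.
    """
    total_shares = sum(shares for shares, _ in pools.values())
    return {name: (cluster_capacity * shares / total_shares) / vms
            for name, (shares, vms) in pools.items()}

# Two sibling "folder" pools with identical Normal shares (4000),
# but one holds 4 VMs and the other 40. Each pool still gets half
# the 100 GHz cluster, so VMs in the crowded pool get a tenth as much.
ent = per_vm_entitlement(100, {"PoolA": (4000, 4), "PoolB": (4000, 40)})
print(ent)  # PoolA VMs: 12.5 GHz each; PoolB VMs: 1.25 GHz each
```

The asymmetry only bites under contention, which is exactly why resource-pools-as-folders often looks harmless in a lightly loaded cluster and then surprises everyone when the cluster fills up.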

PUBLIC SPEAKING SCHEDULE

The VMUG season has started, and I have a few speaking sessions at various events. I thought it might be convenient to list the events and topics:

February 22 – North East UK VMUG – Newcastle – VMware Cloud on AWS from a resource management perspective
March 7 – Swiss-French VMUG – Lausanne, Switzerland – VMware Cloud on AWS from a resource management perspective
March 8 – Swiss-German VMUG – Zurich, Switzerland – VMware Cloud on AWS from a resource management perspective
March 20 – Dutch VMUG – Den Bosch, Netherlands – vSphere Resource Kit Double-Hour (Session 1: vSphere 6.5 Host Resource Deep Dive with Niels Hagoort; Session 2: vSphere 6.5 Clustering Deep Dive with Duncan Epping)
March 29 – Virtual VMUG – Online – VMware Cloud on AWS from a resource management perspective
April 10 – Turkey VMUG – Istanbul, Turkey – VMware Cloud on AWS from a resource management perspective
May 24 – Czech Republic VMUG – Prague – vSphere 6.5 Host Resource Deep Dive with Niels Hagoort

Hope to see you there!

VIRTUALLY SPEAKING PODCAST #67 RESOURCE MANAGEMENT

Two weeks ago Pete Flecha (a.k.a. Pedro Arrow) and John Nicholson invited me to their always awesome podcast to talk about resource management. During our conversation, we covered both on-prem resource management and the features of VMware Cloud on AWS that help cater to the needs of your workloads. Being a guest on this podcast is an honour, and time flies talking to these two guys. Hope you enjoy it as much as I did.