SIMULATING NUMA NODES FOR NESTED ESXI VIRTUAL APPLIANCES
To troubleshoot a particular NUMA client behavior in a heterogeneous multi-cloud environment, I needed to set up an ESXi 7.0 environment. Currently, my lab is running ESXi 8.0, so I turned to William Lam's excellent repository of nested ESXi virtual appliances and downloaded a copy of the 7.0 U3k version. My physical ESXi hosts are equipped with Intel Xeon Gold 5218R CPUs, with 20 cores per socket. The smallest ESXi host in the environment I need to simulate contains ten cores per socket. Therefore, I created a virtual ESXi host with 20 vCPUs and ensured that there were two virtual sockets (10 cores per socket).
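For reference, that two-socket layout comes down to two settings in the nested appliance's configuration. A minimal sketch of the relevant .vmx entries, assuming the 20 vCPU / 10 cores-per-socket layout described above (the same values can be set in the vSphere Client under CPU > Cores per Socket):

```
numvcpus = "20"
cpuid.coresPerSocket = "10"
```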
SAPPHIRE RAPIDS MEMORY CONFIGURATION
The 4th generation of the Intel Xeon Scalable Processors (codenamed Sapphire Rapids) was released early this year, and I've been trying to wrap my head around what's new, what's good, and what's challenging. Besides the new hardware-native accelerators, which a later blog post covers, I noticed the return of different memory speeds when using multiple DIMMs per channel. Before the Scalable Processor architecture arrived in 2017, you faced the devil's triangle when configuring memory capacity: cheap, high capacity, or fast, pick two. Those Xeons offered four memory channels per CPU package, and each memory channel could support up to three DIMMs. The memory speed decreased when equipped with three DIMMs per channel (3 DPC).
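To put numbers on that speed step-down, here is a back-of-the-envelope sketch. It assumes, for illustration, a Sapphire Rapids package with eight DDR5 channels running at 4800 MT/s at 1 DPC and dropping to 4400 MT/s at 2 DPC; check your CPU's specifications for the exact figures:

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth per CPU package in GB/s.

    channels  - memory channels per package
    mt_per_s  - transfer rate of the DIMMs (megatransfers per second)
    bus_bytes - bytes moved per transfer (64-bit data bus = 8 bytes)
    """
    return channels * mt_per_s * bus_bytes / 1000


# Assumed Sapphire Rapids figures: 8 channels, DDR5-4800 at 1 DPC
one_dpc = peak_bandwidth_gbs(8, 4800)   # 307.2 GB/s
# Dropping to DDR5-4400 when populating 2 DIMMs per channel
two_dpc = peak_bandwidth_gbs(8, 4400)   # 281.6 GB/s
print(f"1 DPC: {one_dpc} GB/s, 2 DPC: {two_dpc} GB/s")
```

The capacity-versus-speed trade-off is visible immediately: doubling the DIMM count per channel costs roughly 8% of theoretical peak bandwidth under these assumptions.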
HOW TO CREATE A WINDOWS 11 BOOTABLE USB ON MAC OS MONTEREY
I need to install Windows 11 on a gaming PC, but I only have a MacBook in my house, as this is my primary machine for work. To make things worse, doing this on macOS Monterey is extra difficult due to the heightened security levels that prevent you from running unsigned software, i.e., most free tooling. However, most of the required tooling is provided by macOS itself; you just have to remember the correct steps. And because this is not a process I perform often, I decided to document it for easy retrieval, which might help others facing the same challenge.
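The usual macOS-native flow can be sketched as follows. The disk identifier, volume label, and ISO volume name (disk2, WIN11, CCCOMA_X64FRE_EN-US_DV9) are examples only; verify yours with diskutil list and ls /Volumes before running anything, as eraseDisk is destructive. The one non-native tool is wimlib (available via Homebrew), needed because FAT32 cannot hold the >4 GB install.wim:

```shell
# 1. Find the USB stick (e.g. /dev/disk2) -- double-check before erasing!
diskutil list

# 2. Format it as FAT32 with an MBR partition scheme
diskutil eraseDisk MS-DOS "WIN11" MBR disk2

# 3. Mount the Windows 11 ISO; it appears under /Volumes
hdiutil mount ~/Downloads/Win11.iso

# 4. Copy everything except install.wim (FAT32 caps files at 4 GB)
rsync -avh --exclude=sources/install.wim \
    /Volumes/CCCOMA_X64FRE_EN-US_DV9/ /Volumes/WIN11/

# 5. Split install.wim into <4 GB chunks directly onto the stick
wimlib-imagex split /Volumes/CCCOMA_X64FRE_EN-US_DV9/sources/install.wim \
    /Volumes/WIN11/sources/install.swm 3800
```

The Windows installer picks up the split .swm files automatically, so no reassembly step is needed on the target PC.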
UNEXPLORED TERRITORY EP 34 - WILLIAM LAM TALKS HOME LABS - CHRISTMAS SPECIAL
It’s the end of the year, and everybody is winding down from a hectic year, so we wanted to give you some light stuff to listen to in our last episode. But William had other plans. He is on fire in this episode, dropping one gem after another and sharing a decade’s worth of home lab wisdom. We asked William what his top 10 home lab gifts would be, and he had gift ideas ranging from stocking stuffers to full-blown systems. Listen via Spotify (https://spoti.fi/3jdOmUp), Apple (https://apple.co/3WpMB50), or online (https://unexploredterritory.tech).
UNEXPLORED TERRITORY PODCAST 32 - IT GIVING MCLAREN RACING THE EDGE
Edward Green, Head of Commercial Technology at McLaren Racing, keynoted at the VMware Explore tech conference in Barcelona. I had the honor of sitting down with him for a few minutes to talk about the role of IT in F1. Most of us watch the races with multiple screens: the main TV for the race and additional screens to follow the various telemetry feeds, so you know how much data flows between the cars and the teams. But sitting down with Edward and hearing him explain how data transfer speeds and disk sizes impact real training time for the driver is just candy to the ears of every tech-savvy Formula 1 fan. The episode starts with a short interview with Joe Baguley, the racing CTO of VMware, discussing his involvement with the McLaren Racing partnership and his passion for racing!
ML SESSION AT CTEX VMWARE EXPLORE
Next week during VMware Explore, VMware is also organizing the Customer Technical Exchange. I’m presenting the session “vSphere Infrastructure for Machine Learning Workloads”, in which I will discuss how vSphere acts as a self-service platform for data science teams to easily and quickly deploy ML platforms with acceleration resources. CTEX is happening at the Fira Barcelona Gran Via in room CC4 4.2. This is an NDA event. Therefore, you will need to register vi
VSPHERE 8 CPU TOPOLOGY FOR LARGE MEMORY FOOTPRINT VMS EXCEEDING NUMA BOUNDARIES
By default, vSphere manages the vCPU configuration and vNUMA topology automatically. vSphere attempts to keep the VM within a NUMA node until the vCPU count of that VM exceeds the number of physical cores inside a single CPU socket of that particular host. For example, my lab has dual-socket ESXi hosts, and each host has 20 processor cores per socket. As a result, vSphere creates a VM with a uniform memory access (UMA) vCPU topology up to a vCPU count of 20. Once I assign 21 vCPUs, it creates a vNUMA topology with two virtual NUMA nodes and exposes this to the guest OS for further memory optimization.
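The default sizing rule described above can be sketched in a few lines. This is an illustrative model of the behavior, not vSphere's actual implementation, and it ignores advanced settings such as numa.vcpu.maxPerVirtualNode that can override the default:

```python
import math


def vnuma_nodes(vcpus: int, cores_per_socket: int) -> int:
    """Model of vSphere's default vNUMA sizing: the VM stays UMA
    (a single node) until its vCPU count exceeds the physical core
    count of one socket, after which it is split across enough
    virtual NUMA nodes to fit."""
    if vcpus <= cores_per_socket:
        return 1
    return math.ceil(vcpus / cores_per_socket)


# Dual-socket host with 20 cores per socket, as in the example above
print(vnuma_nodes(20, 20))  # 1 -> UMA topology
print(vnuma_nodes(21, 20))  # 2 -> two virtual NUMA nodes
```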
UNEXPLORED TERRITORY PODCAST EP30 - PROJECT KESWICK WITH ALAN RENOUF
While preparing the podcast, I knew this episode would be good. Edge technology immensely excites me, and the way the project team strays away from the proverbial hammer and looks at ways to incorporate different principles, like GitOps management concepts, is inspiring. To top it off, you have Alan Renouf, a long-time colleague and friend, to talk about it. Unfortunately, Covid prevented me from taking part in this discussion, but, of course, Duncan and Johan had an excellent conversation with Alan. Please check it out on Spotify, Apple, or via our website. Enjoy!
VSPHERE 8 CPU TOPOLOGY DEVICE ASSIGNMENT
There seems to be some misunderstanding about the new vSphere 8 CPU Topology Device Assignment feature, and I hope this article helps you understand when to use it. The feature defines the mapping of a virtual PCIe device onto the vNUMA topology, and its main purpose is guest OS and application optimization. It does not impact NUMA affinity or the scheduling of vCPUs and memory locality at the physical resource layer; that remains governed by the VM placement policy (best effort). Let’s go over the basics first and then explore the settings and their effect on the virtual machine. The feature is located in the VM Options menu of the virtual machine.
COULD NOT INITIALIZE PLUGIN ‘LIBNVIDIA-VGX.SO’ - CHECK SR-IOV IN THE BIOS
I was building a new lab with some NVIDIA A30 GPUs in a few hosts, and after installing the NVIDIA driver onto the ESXi host, I got the following error when powering up a VM with a vGPU profile. Typically, that error means one of three things:
- Shared Direct passthrough is not enabled on the GPU
- ECC memory is enabled
- The VM memory reservation was not set to protect its full memory range
But Shared Direct passthrough was enabled, and because I was using a C-type profile and an NVIDIA A30 GPU, I did not have to disable ECC memory. According to the NVIDIA Virtual GPU software documentation: 3.4 Disabling and Enabling ECC Memory
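To rule out the ECC condition from the list above, nvidia-smi on the ESXi host can report and toggle it. A sketch, assuming the NVIDIA vGPU driver VIB is already installed (a change to the ECC mode only takes effect after a host reboot):

```shell
# Confirm the NVIDIA driver VIB is actually installed on the host
esxcli software vib list | grep -i nvidia

# Report the current ECC mode per GPU
nvidia-smi --query-gpu=name,ecc.mode.current --format=csv

# Disable ECC on all GPUs -- only needed for profiles that require it;
# not necessary for C-type profiles on the A30. Reboot afterwards.
nvidia-smi -e 0
```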