RETRIEVAL-AUGMENTED GENERATION BASICS FOR THE DATA CENTER ADMIN

Thanks to ChatGPT, Large Language Models (LLMs) have caught the attention of people everywhere. When built into products and services, LLMs can make most interactions with systems much faster. Current LLM-enabled apps mostly use open-source LLMs such as Llama 2, Mistral, Vicuna, and sometimes even Falcon 180B. These models are trained on publicly available data, allowing them to respond appropriately to most prompts. Yet organizations often want LLMs to respond using domain-specific or private data. Retrieval-Augmented Generation (RAG) addresses this by retrieving relevant private documents at query time and injecting them into the prompt as context.
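The retrieval-augmented generation pattern named in the heading can be sketched in a few lines. This is a toy illustration only: the keyword-overlap scorer stands in for a real embedding/vector-database search, and the assembled prompt stands in for an actual LLM API call; the helper names and sample documents are hypothetical.

```python
# Minimal sketch of the RAG pattern: retrieve the most relevant
# private documents, then prepend them to the prompt sent to the LLM.
# The word-overlap scorer is a stand-in for a real embedding search.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved private context ahead of the user question."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical private documents the base model has never seen.
docs = [
    "The ERP maintenance window is every Sunday at 02:00 UTC.",
    "Llama 2 is an open-source large language model.",
    "Expense reports are approved by the finance team.",
]
prompt = build_prompt("When is the ERP maintenance window?", docs)
```

A production system replaces `score` with embedding similarity and sends `prompt` to the model, but the flow — retrieve, assemble, generate — is the same.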

GEN AI SESSIONS AT EXPLORE BARCELONA 2023

I’m looking forward to next week’s VMware Explore conference in Barcelona. It’s going to be a busy week. Hopefully, I will meet many old friends, make new friends, and talk about Gen AI all week. I’m presenting a few sessions, listed below, and meeting with customers to talk about VMware Private AI Foundation. If you see me walking by, come and have a talk with me.

BASIC TERMINOLOGIES OF LARGE LANGUAGE MODELS

Many organizations are in the process of deploying large language models for their own use cases. Publicly available Large Language Models (LLMs), such as ChatGPT, are trained on publicly available data (in ChatGPT’s case, through September 2021). However, they are unaware of proprietary private data, and such information is critical to the majority of enterprise processes. To become a useful tool in the enterprise space, an LLM is further trained or fine-tuned on proprietary data to adapt to organization-specific concepts.
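One widely used family of fine-tuning techniques is parameter-efficient adaptation, e.g. LoRA-style low-rank updates. The toy sketch below is not the article's method, just an illustration of the idea with pure-Python matrices: instead of retraining a full d x d weight matrix, only two small matrices are trained.

```python
# Toy illustration of parameter-efficient fine-tuning (LoRA-style):
# keep the pretrained weights W frozen, train two small matrices
# B (d x r) and A (r x d), and apply W' = W + B @ A.
# Pure-Python lists stand in for real tensors; this is a sketch of
# the parameter-count argument, not a training recipe.

def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

d, r = 8, 2                        # model dimension vs. adapter rank
W = [[0.0] * d for _ in range(d)]  # frozen pretrained weights
B = [[1.0] * r for _ in range(d)]  # trainable d x r matrix
A = [[1.0] * d for _ in range(r)]  # trainable r x d matrix

delta = matmul(B, A)               # low-rank update, same shape as W
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d                # parameters in a full update: 64
lora_params = d * r + r * d        # parameters actually trained: 32
```

With realistic dimensions (d in the thousands, r around 8-64), the trained parameter count shrinks by orders of magnitude, which is what makes adapting an LLM to organization-specific data tractable.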

MY SESSIONS AT VMWARE EXPLORE 2023 LAS VEGAS

Next week we are back in Las Vegas. Busy times ahead: meeting customers, catching up with old friends, making new friends, and presenting a few sessions. Next week I will present at the Customer Technical Exchange (CTEX) and {code}, and host two meet-the-expert sessions. I will also participate as a part-time judge at the {code} hackathon.

Breakout Sessions
45 Minutes of NUMA (A CPU is not a CPU Anymore) [CODEB2761LV]
Tuesday, Aug 22, 2:45 PM - 3:30 PM PDT, Level 4, Delfino 4003

VSPHERE ML ACCELERATOR SPECTRUM DEEP DIVE – INSTALLING THE NVAIE VGPU DRIVER

After setting up the Cloud License Service Instance, the NVIDIA AI Enterprise vGPU driver must be installed on the ESXi host. Running a single driver version across all the ESXi hosts in the cluster that contain NVIDIA GPU devices is recommended. The most common error during the GPU install process is using the wrong driver, and it’s an easy mistake to make. In vGPU version 13 (the current NVAIE version is 15.2), NVIDIA split its ESXi host vGPU driver into two variants: a standard vGPU driver component that supports graphics, and an AI Enterprise (AIE) vGPU component that supports compute. Ampere-generation devices, such as the A30 and A100, support compute only, so they require the AIE vGPU component. AIE components are available for all NVIDIA drivers since vGPU 13.
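The graphics-versus-compute split can be captured as a small lookup. The helper below is illustrative only: the device set is an example, not an official or exhaustive NVIDIA support matrix, so always check the NVAIE release notes for your GPU.

```python
# Illustrative helper for picking the right ESXi vGPU driver
# component. Compute-only Ampere devices (e.g., A30, A100) need the
# AI Enterprise (AIE) component; devices with a display engine use
# the standard vGPU component. The set below is an example, not an
# official support matrix -- verify against NVIDIA's documentation.

COMPUTE_ONLY = {"A30", "A100"}      # no graphics support: AIE required

def vgpu_component(gpu_model: str) -> str:
    if gpu_model.upper() in COMPUTE_ONLY:
        return "NVD-AIE"            # AI Enterprise vGPU component
    return "NVD-VGPU"               # standard graphics vGPU component

choice = vgpu_component("A100")
```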

VSPHERE ML ACCELERATOR SPECTRUM DEEP DIVE – NVAIE CLOUD LICENSE SERVICE SETUP

Next in this series is installing the NVAIE GPU Operator on a TKGs guest cluster. However, we must satisfy a few requirements before we can get to that step:

- NVIDIA NVAIE license activated
- Access to NVIDIA NGC, the NVIDIA Enterprise Catalog, and the Licensing Portal
- License Server Instance activated
- NVIDIA vGPU Manager installed on the ESXi host with an NVIDIA GPU installed
- VM Class with GPU specification configured
- Ubuntu image available in the content library for the TKGs worker node

VSPHERE ML ACCELERATOR SPECTRUM DEEP DIVE – USING DYNAMIC DIRECTPATH IO (PASSTHROUGH) WITH VMS

vSphere 7 and 8 offer two passthrough options: DirectPath I/O and Dynamic DirectPath I/O. Dynamic DirectPath I/O is the vSphere brand name for the passthrough functionality of PCI devices to virtual machines. It allows assigning a dedicated GPU to a VM with the lowest possible overhead. DirectPath I/O assigns a PCI passthrough device by identifying a specific physical device at a specific bus location on a specific ESXi host, using the Segment/Bus/Device/Function (SBDF) format. This configuration restricts the VM to that specific ESXi host.
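The Segment/Bus/Device/Function address mentioned above is a fixed hardware coordinate, e.g. "0000:af:00.0", which is exactly why a static DirectPath I/O assignment pins the VM to one host. A small parser makes the format concrete; it is illustrative only, not part of any vSphere API.

```python
import re

# Parse a Segment/Bus/Device/Function (SBDF) PCI address such as
# "0000:af:00.0": 16-bit segment, 8-bit bus, 5-bit device slot, and
# a 3-bit function, all in hex except the function digit.
# Illustrative of the addressing format only.

SBDF = re.compile(r"^([0-9a-fA-F]{4}):([0-9a-fA-F]{2}):"
                  r"([0-9a-fA-F]{2})\.([0-7])$")

def parse_sbdf(address: str) -> dict:
    m = SBDF.match(address)
    if not m:
        raise ValueError(f"not a valid SBDF address: {address}")
    seg, bus, dev, fn = m.groups()
    return {"segment": int(seg, 16), "bus": int(bus, 16),
            "device": int(dev, 16), "function": int(fn)}

addr = parse_sbdf("0000:af:00.0")
```

Because every field identifies one physical slot on one host, the same address on another ESXi host points at different (or no) hardware; Dynamic DirectPath I/O avoids this by matching devices on attributes instead of a fixed address.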

#47 - HOW VMWARE ACCELERATES CUSTOMERS ACHIEVING THEIR NET ZERO CARBON EMISSIONS GOAL

In episode 047, we spoke with Varghese Philipose about VMware’s sustainability efforts and how they help our customers meet their sustainability goals. Features like the Green Score help many of our customers understand how they can lower their carbon emissions and hopefully reach net zero.

Topics discussed:
- Creating sustainability dashboards - https://blogs.vmware.com/management/2019/06/sustainability-dashboards-in-vrealize-operations-find-how-much-did-you-contribute-to-a-greener-planet.html
- Sustainability dashboards in vROps 8.6 - https://blogs.vmware.com/management/2021/10/sustainability-dashboards-in-vrealize-operations-8-6.html
- VMware Green Score - https://blogs.vmware.com/management/2022/11/vmware-green-score-in-aria-operations-formerly-vrealize-operations.html
- Intrinsically green - https://news.vmware.com/esg/intrinsically-evergreen-vmware-earth-day-2023
- Customer success story - https://blogs.vmware.com/customer-experience-and-success/2023/04/tam-partnerships-make-customers-the-hero.html

Follow the podcast on Twitter for updates and news about upcoming episodes: https://twitter.com/UnexploredPod

VSPHERE ML ACCELERATOR SPECTRUM DEEP DIVE – ESXI HOST BIOS, VM, AND VCENTER SETTINGS

To deploy a virtual machine with a vGPU, whether a TKG worker node or a regular VM, you must enable some ESXi host-level and VM-level settings. All these settings relate to the isolation of GPU resources, memory-mapped I/O (MMIO), and the ability of the (v)CPU to engage with the GPU using native CPU instructions. MMIO provides the most consistent high performance possible. By default, vSphere assigns an MMIO region (an address range, not actual memory pages) of 32GB to each VM. However, modern GPUs are ever more demanding and introduce new technologies that require the ESXi host, VM, and GPU settings to be in sync. This article shows why you need to configure these settings, but let’s start with an overview of the required settings.
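Because the default 32GB MMIO region is too small for modern data-center GPUs, the VM typically needs `pciPassthru.use64bitMMIO` and `pciPassthru.64bitMMIOSizeGB` advanced settings. The sizing formula below is a commonly cited rule of thumb (combined GPU memory, rounded up to the next power of two, then doubled) — treat it as an assumption and verify it against the VMware and NVIDIA documentation for your specific GPUs.

```python
# Hedged sketch: estimating the pciPassthru.64bitMMIOSizeGB VM
# advanced setting for (v)GPU-equipped VMs. The rule of thumb used
# here -- total GPU memory rounded up to the next power of two,
# then doubled -- is an assumption to validate against the official
# VMware/NVIDIA guidance for your hardware.

def next_pow2(n: int) -> int:
    p = 1
    while p < n:
        p *= 2
    return p

def mmio_size_gb(gpu_mem_gb: int, num_gpus: int) -> int:
    return next_pow2(gpu_mem_gb * num_gpus) * 2

# Example: two 40 GB A100s -> 80 GB total -> 128 -> 256 GB region.
vm_advanced_settings = {
    "pciPassthru.use64bitMMIO": "TRUE",
    "pciPassthru.64bitMMIOSizeGB": str(mmio_size_gb(40, 2)),
}
```

The resulting key/value pairs go into the VM's advanced configuration (the VMX file), which is where the overview of required settings below picks up.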

VSPHERE ML ACCELERATOR SPECTRUM DEEP DIVE – NVIDIA AI ENTERPRISE SUITE

vSphere allows assigning GPU devices to a VM using VMware’s (Dynamic) DirectPath I/O technology (passthrough) or NVIDIA’s vGPU technology. The NVIDIA vGPU technology is a core part of the NVIDIA AI Enterprise suite (NVAIE). NVAIE is more than just the vGPU driver; it’s a complete technology stack that allows data scientists to run an end-to-end workflow on certified accelerated infrastructure. Let’s look at what NVAIE offers and how it works under the covers.