At a customer’s site towards the end of a deployment, we decided to see what this newfangled vSphere Compliance Checker could do. We ran it against an ESXi 4.1U1 host and it spat out some nice colors and information. The quick and dirty of it is that it was easy to install, easy to use, and provided useful information. So I decided to run it against my ESXi 5 hosts in a test lab and write up a quickie post.
This post is the third in a series dedicated to helping you set up your update infrastructure in vSphere 4.1. Part I is about installing and configuring Update Manager. Part II shows you how to install and configure the Update Manager Download Service. As the last in this series, this post will explain how to patch your ESX/ESXi hosts.
This walkthrough is part II of a series of guides on installing and configuring VMware’s tools for updating and patching ESXi 4.1 hosts. You can find Part I here and Part III here. The Update Manager Download Service (UMDS) is used in an air-gapped environment where the vCenter Update Manager server (VUM) does not have access to the Internet to download patches itself – instead it relies on UMDS to download the patches. The downloaded patches are then manually copied via removable media, usually a CD/DVD, to VUM, which works through the vSphere Client and the Update Manager plug-in to update the hosts. Although VUM can download operating system patches for Windows and metadata for Linux patches, we’re not using this configuration in this guide. We’re assuming the environments are updated via WSUS or SCCM.
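To give you a feel for the workflow, here’s a rough sketch of the UMDS side of that process using the vmware-umds command-line utility. The export path below is just an example – substitute a folder you can burn to CD/DVD.

```shell
REM Download the latest host patch metadata and binaries into the
REM UMDS patch store (run on the UMDS server, which has Internet access)
vmware-umds -D

REM Export the downloaded patches and metadata to a folder destined
REM for removable media; E:\umds-export is a placeholder path
vmware-umds -E --export-store E:\umds-export
```

Once the export folder is copied over to the VUM server, you point Update Manager at it as a shared repository and it picks the patches up from there.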
So during my first data center virtualization project, I had to write up a series of documents for internal reference. These documents were to help us perform a standard installation at each site we migrate. I thought it would be helpful to post them for anyone else looking to perform these tasks, as well. This series of posts is about VMware Update Manager 4.1U1 and its associated Update Manager Download Service. It appears in three posts because the topic can be logically separated into three steps: installing and configuring VUM, installing and configuring UMDS, and a patching guide once your initial update infrastructure is in place. This post, as you can see, is part I. Let me know if it helps you out or if I missed something. All the best!
At a minimum, you’ll want to perform regular backups of your vCenter, Update Manager, and SQL system databases. You don’t have to be a DBA to perform simple backups. You don’t need to know T-SQL or database programming to perform these steps. There’s an easy wizard that walks you through a standard Windows Next-Next-Finish setup.
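That said, if you’d rather script it than click through the wizard, the same backups are a one-liner each from a command prompt on the SQL server. The database names below (VIM_VCDB for vCenter, VIM_UMDB for Update Manager) are the installer defaults and the backup path is just an example – substitute your own.

```shell
REM Back up the vCenter database using Windows authentication
sqlcmd -S localhost -E -Q "BACKUP DATABASE [VIM_VCDB] TO DISK = N'D:\Backups\VIM_VCDB.bak' WITH INIT"

REM Back up the Update Manager database
sqlcmd -S localhost -E -Q "BACKUP DATABASE [VIM_UMDB] TO DISK = N'D:\Backups\VIM_UMDB.bak' WITH INIT"
```

You can drop the same statements into a SQL Server Agent job to run them on a schedule.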
There are a couple of things to note in the walkthrough below. We’re using SQL Server 2008 Enterprise Edition 64-bit on 64-bit Windows Server 2008 SP2 Enterprise Edition. The SQL server is also a virtual machine in a vSphere 4.1 environment.
For my notes, I’m sharing what I’ve found searching the ‘net to bind VMkernel NICs to VMware’s built-in iSCSI software initiator in ESXi 4.1. I know ESXi 5.0 has changed this process to a nice GUI, but we’re stuck with the CLI in 4.1.
If you’re configuring jumbo frames as I’ve shown in a previous post, bind the VMkernel NICs after configuring jumbo frames.
Assuming you have two uplinks for iSCSI traffic, on the vSwitch of your iSCSI port group, temporarily set one uplink to unused – binding requires each iSCSI VMkernel port to have only one active uplink. You’ll also want to note the vmhba# of the software iSCSI adapter. You can view this from the Configuration tab > Storage Adapters by viewing the iSCSI Software Adapter. You’ll also need to note the VMkernel NIC names of each iSCSI port group. You can view these from the Configuration tab > Networking by looking at the iSCSI port group. It will show the iSCSI port group name, the vmk#, IP address, and VLAN if you have one configured. Then from a CLI, either via the console or SSH, execute the following commands for each iSCSI port name:
Example: esxcli swiscsi nic add -n vmk# -d vmhba#
Note: If you’d like to see screenshots for this article, check out this other post.
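Putting it together, here’s what the full binding sequence looks like with example values plugged in – I’m assuming two iSCSI VMkernel ports named vmk1 and vmk2 and a software iSCSI adapter at vmhba33, but your vmk# and vmhba# values will almost certainly differ, so use the ones you noted above.

```shell
# Bind each iSCSI VMkernel NIC to the software iSCSI adapter
# (vmk1, vmk2, and vmhba33 are example values for this sketch)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify both VMkernel NICs are now bound to the initiator
esxcli swiscsi nic list -d vmhba33
```

After binding, set the temporarily unused uplink back to its original failover order and rescan the iSCSI adapter from the vSphere Client so the paths show up.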
I’ve been building a test environment to play with vSphere 4.1 before we begin our implementation. In order to experiment with the enterprise features of vSphere, I needed shared storage between my ESXi hosts. As always, I turned to Openfiler. Now, I’ve deployed Openfiler before, but it was just one ESXi host and a single LUN. It was easy, and there were plenty of good walkthroughs on how to set it up that way. But using the Google-izer, I couldn’t find a single page that explained how to configure Openfiler for shared storage between multiple hosts. When I finally got it working, I felt accomplished and decided to document the process for future reference. Maybe someone out there will find it useful, too.