Binding iSCSI Port Names to VMware Software iSCSI Initiator – ESXi 4.1
Posted: October 3, 2011 Filed under: Networking, Storage, VMware | Tags: 4.1, bind, esxcli, iscsi, iscsi initiator, port groups, software iscsi initiator, vmware, vsphere

For my notes, I’m sharing what I’ve found searching the ‘net on how to bind VMkernel NICs to VMware’s built-in software iSCSI initiator in ESXi 4.1. I know ESXi 5.0 has moved this process into a nice GUI, but we’re stuck with the CLI in 4.1.
If you’re configuring jumbo frames as I’ve shown in a previous post, bind the VMkernel NICs after configuring jumbo frames.
Assuming you have two uplinks for iSCSI traffic, temporarily set one uplink to unused on the vSwitch of your iSCSI port group. You’ll also want to note the vmhba# of the software iSCSI adapter; you can find it under the Configuration tab > Storage Adapters by selecting the iSCSI Software Adapter. Likewise, note the VMkernel NIC name (vmk#) of each iSCSI port group under the Configuration tab > Networking, where each iSCSI port group shows its name, vmk#, IP address, and VLAN if you have one configured. Then, from a CLI session on the host (console or SSH), execute the following command for each iSCSI port name:
Example: esxcli swiscsi nic add -n vmk# -d vmhba#
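As a concrete sketch, assuming the software iSCSI adapter is vmhba33 and the two iSCSI port groups use vmk1 and vmk2 (substitute the numbers from your own host), the bindings and a quick check would look like this:

esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33

The list command should show both VMkernel NICs bound to the software iSCSI adapter; once it does, you can restore the uplink you temporarily set to unused.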
How to Configure Jumbo Frames for an iSCSI Port Group – ESXi 4.1
Posted: September 29, 2011 Filed under: Networking, VMware | Tags: 4.1, esxcfg, esxcfg-vmknic, esxcfg-vswitch, esxi, frames, iscsi, jumbo, jumbo frames, portgroup, vmkernel, vmknic

If you want to enable jumbo frames on an iSCSI port group in ESXi 4.1, you’ll need to make configuration changes at both the vSwitch and VMkernel NIC level. Through trial and error, I found that I had to create the iSCSI port group from the command line instead of just enabling jumbo frames on an existing port group. I had originally created an iSCSI port group via the vSphere Client, but enabling jumbo frames on it from the CLI didn’t work; I had to delete that port group first, then recreate it from the CLI. Note that these commands are case sensitive, including the names of vSwitches, port groups, and VMkernel NICs. These commands were run over an SSH session directly to an ESXi 4.1 host.
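As a rough sketch of the sequence using the esxcfg-vswitch and esxcfg-vmknic tools mentioned in the tags above (the names vSwitch1 and iSCSI and the IP addressing are placeholders for illustration; use your own):

esxcfg-vmknic -d iSCSI                      (delete the existing VMkernel NIC, if one was created in the GUI)
esxcfg-vswitch -D iSCSI vSwitch1            (delete the existing iSCSI port group)
esxcfg-vswitch -m 9000 vSwitch1             (set the vSwitch MTU to 9000)
esxcfg-vswitch -A iSCSI vSwitch1            (recreate the iSCSI port group from the CLI)
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 iSCSI   (recreate the VMkernel NIC with an MTU of 9000)
esxcfg-vmknic -l                            (list the VMkernel NICs and confirm the MTU column shows 9000)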
How to Configure LUN Masking with Openfiler 2.99 and ESXi 4.1
Posted: September 11, 2011 Filed under: Storage, VMware | Tags: esxi, lun masking, openfiler, vmware

Note: If you’d like to see screenshots for this article, check out this other post.
I’ve been building a test environment to play with vSphere 4.1 before we begin our implementation. In order to experiment with the enterprise features of vSphere, I needed shared storage between my ESXi hosts. As always, I turned to Openfiler. I’ve deployed Openfiler before, but that was just one ESXi host and a single LUN. It was easy, and there were plenty of good walkthroughs on how to set it up that way. But using the Google-izer, I couldn’t find a single page that explained how to configure Openfiler for shared storage between multiple hosts. When I finally got it working, I felt accomplished and decided to document the process for future reference. Maybe someone out there will find it useful, too.
File System Alignment in Virtual Environments
Posted: June 24, 2011 Filed under: NetApp, Storage, VMware, Windows | Tags: alignment, file system, file system alignment in virtual environments, storage, vmware

In speaking with my fellow Implementation Engineers and team leads, I’ve come to learn that file system misalignment is a known issue in virtual environments and can cause performance problems for virtual machines. A little research provided an overview of the storage layers in a virtualized environment, details on the proper alignment of guest file systems, and a description of the performance impact misalignment can have on the virtual infrastructure. NetApp has produced a white paper on file system alignment in virtual environments, TR 3747, which I’ve reproduced below.
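Before the excerpt, a quick back-of-the-envelope illustration of the problem (my own generic numbers, not taken from the white paper): older Windows guests start their first partition at sector 63, which is 63 × 512 bytes = 32,256 bytes into the virtual disk. That offset is not a multiple of the 4 KB (or larger) blocks used by the underlying storage, so a single guest I/O can straddle two storage blocks and turn one read or write into two. Starting the partition at sector 2048 instead (a 1 MB offset) keeps everything on block boundaries. On a Linux guest, for example, you can check the start sector with:

fdisk -lu /dev/sda

If the partition’s start sector is evenly divisible by 8 (8 × 512-byte sectors = 4 KB), the partition sits on a 4 KB boundary.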
In any server virtualization environment using shared storage, VMs access their storage through several layers. Shared storage can be presented to the hypervisor in different ways, and each method involves a different set of storage layers.
VMware vSphere 4 has four ways of using shared storage for deploying virtual machines:




