How to Configure Jumbo Frames for an iSCSI Port Group – ESXi 4.1
Posted: September 29, 2011 | Filed under: Networking, VMware | Tags: 4.1, esxcfg, esxcfg-vmknic, esxcfg-vswitch, esxi, frames, iscsi, jumbo, jumbo frames, portgroup, vmkernel, vmknic
If you want to enable jumbo frames on an iSCSI port group in ESXi 4.1, you’ll need to make configuration changes at both the vSwitch and the VMkernel NIC level. Through trial and error, I found that I had to create the iSCSI port group from the command line rather than enable jumbo frames on an already existing port group. I had originally created the iSCSI port group through the vSphere Client, but enabling jumbo frames on it from the CLI didn’t work; I had to delete that port group first and recreate it from the CLI. Note that these commands are case sensitive, including the names of vSwitches, port groups, and VMkernel NICs. The commands were run over an SSH session directly to an ESXi 4.1 host.
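As a rough sketch of the sequence (assuming a vSwitch named vSwitch1, a port group named iSCSI, and an example address of 192.168.10.11/24; these are placeholders for illustration, not the names from my environment):

# Set the vSwitch MTU to 9000 so it can carry jumbo frames
esxcfg-vswitch -m 9000 vSwitch1

# Remove the VMkernel NIC and port group originally created in the vSphere Client
esxcfg-vmknic -d iSCSI
esxcfg-vswitch -D iSCSI vSwitch1

# Recreate the port group from the CLI, then add a VMkernel NIC with a 9000 MTU
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 iSCSI

# Verify that both the vSwitch and the VMkernel NIC report an MTU of 9000
esxcfg-vswitch -l
esxcfg-vmknic -l

Checking the MTU column in the output of the two list commands is the quickest way to confirm the change actually took.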
Configuring Cisco Nexus 5020 and 2224 Fabric Extenders for Virtual Port Channeling (vPC)
Posted: September 26, 2011 | Filed under: Cisco Nexus, Networking | Tags: 2000 series, 2224, 5000 series, 5020, Cisco, Nexus, port channeling, virtual, virtual port channeling, vPC
So it’s been a long time since I’ve posted. We’ve finally finished our data center site surveys and we’re very close to starting the implementation phase. In preparation for implementation, we’ve begun testing configurations, playing with possibilities, and generally, seeing what the given hardware can do. We don’t exactly know what the architecture design team will give us to implement, but our pre-work will let us get a feel for the kinds of things we’ll be doing. For instance, we know we’ll be using virtual port channels and fabric extenders, so we’ve configured these on several of our Nexus switches. We’ll probably blow away the configs when we officially start anyways, but again, this gives us a chance to get our hands on the equipment and practice some of the same configs we’ll be using later.
Our test design will have two 5020s, each with 40 10GbE SFP ports plus a 6-port 10GbE SFP daughter card, and two 2224 fabric extenders (FEXs) with 24 1GbE copper ports. It will not be a cross-connected FEX design, because we will use servers with dual-homed 10GbE converged network adapters (CNAs), which we plan to cross connect, and Cisco does not support a design where both the FEXs and the servers are cross connected. Our test design looks like the diagram below. Note that the diagram shows the server connected to each FEX via copper ports; we’ll actually be connecting each server’s CNA via twinax cables to the 10Gb ports on the 5020s.
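For a rough idea of what the vPC piece looks like on the 5020s, here is a minimal sketch. The domain ID, keepalive addresses, port-channel numbers, and interface numbers are made up for illustration, and the FEX association commands aren’t shown; this is just the shape of the config, not our final one.

! Run on each 5020 (swap the peer-keepalive source/destination addresses on the second switch)
feature vpc
feature lacp

vpc domain 1
  peer-keepalive destination 10.1.1.2 source 10.1.1.1

! Port channel used as the vPC peer link between the two 5020s
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! vPC port channel for a dual-homed server CNA, one 10GbE link to each 5020
interface port-channel 20
  switchport mode trunk
  vpc 20

interface Ethernet1/20
  switchport mode trunk
  channel-group 20 mode active

Once both switches are configured, show vpc is the quick way to check whether the peer link and the vPC itself come up.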
How to Configure LUN Masking with Openfiler 2.99 and ESXi 4.1
Posted: September 11, 2011 | Filed under: Storage, VMware | Tags: esxi, lun masking, openfiler, vmware
Note: If you’d like to see screenshots for this article, check out this other post.
I’ve been building a test environment to play with vSphere 4.1 before we begin our implementation. In order to experiment with the enterprise features of vSphere, I needed shared storage between my ESXi hosts. As always, I turned to Openfiler. Now, I’ve deployed Openfiler before, but it was just one ESXi host and a single LUN. It was easy, and there were plenty of good walkthroughs on how to set it up that way. But using the Google-izer, I couldn’t find a single page that explained how to configure Openfiler for shared storage between multiple hosts. When I finally got it working, I felt accomplished and decided to document the process for future reference. Maybe someone out there will find it useful, too.