24 October 2014 Edit: fixed typos when setting IP addresses and descriptions on Vyatta interfaces (from eth0 to the proper interfaces)
Good day my Internet friends! Let me say that I feel accomplished – and not just because I got out of bed this morning, although that is a big win for me. No, instead I’ve actually been quite productive (based on my standards, anyways). I’ve been rebuilding my home test lab for the past couple of days before I start my new job. What I really wanted was to get back heavy into the NetApp DataONTAP simulator, but I wanted to get inter-VLAN routing working first so I could have some realistic networking. I only have layer 2 switches in the lab so I was looking for ways to accomplish this. I reckon I knew I would have to use software of some sort, but I hadn’t actually messed with anything up to this point. I’d heard of the Vyatta virtual router recently, so I thought I’d give that a try. You can download the free community edition here with a login: http://www.vyatta.org/downloads I wasn’t able to find the Vyatta virtual appliance I saw advertised around the interwebs, but I was able to install from the LiveCD ISO just fine.
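For anyone attempting the same thing, the inter-VLAN routing setup in Vyatta's config mode came down to a handful of commands. The interface name, VLAN IDs, and subnets below are just examples from my lab, so substitute your own:

```
configure
set interfaces ethernet eth1 vif 10 address 192.168.10.1/24
set interfaces ethernet eth1 vif 10 description "VLAN 10"
set interfaces ethernet eth1 vif 20 address 192.168.20.1/24
set interfaces ethernet eth1 vif 20 description "VLAN 20"
commit
save
```

Each VLAN gets its own virtual interface (vif) on the physical NIC, and Vyatta routes between them as long as the switchport facing eth1 is a trunk carrying both VLANs.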
I was troubleshooting a production issue a couple of days ago that led me to ask our Networking team for the switchport configs of our ESXi 5.0 hosts that pass virtual machine traffic. Here’s a snippet of what they came back with for two particular ports:
description -=R910 ESX# 1 – Front Side=-
switchport mode trunk
description -=R910 ESX# 1 – Front Side=-
Well. Not only do I see our problem (no config *at all* on one port!), but I see something else that troubles me. Our ESXi host-facing ports are configured only as trunk ports. Absolutely *nothing* else. Well, this just won’t do.
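For comparison, here’s a sketch of what I’d rather see on an ESXi host-facing port. The interface number and VLAN list are hypothetical; the point is to prune the trunk to the VLANs the host actually needs and skip the spanning-tree listening/learning delay on a host port:

```
interface GigabitEthernet1/0/1
 description -=R910 ESX# 1 - Front Side=-
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 spanning-tree portfast trunk
```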
So my team and I got a call to swing by a customer’s site on our way to another job. They told us half the ports went bad on a FEX and we were to install the replacement that just arrived onsite. In this post, I’ll explain how to replace the FEX (which is trivial) and more importantly how to verify that it’s working after installation.
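As a preview, the verification mostly comes down to a few show commands on the parent switch (the FEX number 100 here is just an example):

```
show fex
show fex 100 detail
show interface fex-fabric
```

You want the FEX state to come back as Online and the fabric interfaces to show as Active before you call the replacement good.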
During another virtualization implementation at a customer’s site, I had the opportunity to upgrade Nexus 5020 switches. We upgraded from 5.0(2)N2(1) to 5.0(3)N2(1). The process was surprisingly simple. The steps include
1. Setting up a TFTP server
2. Uploading both the NX-OS binary and the kickstart binary
3. Installing the binaries
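The steps above boil down to a handful of commands. The TFTP server address and image filenames below are examples; match them to the files you actually downloaded:

```
copy tftp://192.168.1.10/n5000-uk9-kickstart.5.0.3.N2.1.bin bootflash:
copy tftp://192.168.1.10/n5000-uk9.5.0.3.N2.1.bin bootflash:
install all kickstart bootflash:n5000-uk9-kickstart.5.0.3.N2.1.bin system bootflash:n5000-uk9.5.0.3.N2.1.bin
```

The nice thing about `install all` is that it runs compatibility checks and tells you whether the upgrade will be disruptive before it commits to anything.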
For my notes, I’m sharing what I’ve found searching the ‘net to bind VMkernel NICs to VMware’s built-in iSCSI software initiator in ESXi 4.1. I know ESXi 5.0 has changed this process to a nice GUI, but we’re stuck with the CLI in 4.1.
If you’re configuring jumbo frames as I’ve shown in a previous post, bind the VMkernel NICs after configuring jumbo frames.
Assuming you have two uplinks for iSCSI traffic, on the vSwitch of your iSCSI port group, temporarily set one uplink to unused. You’ll also want to note the vmhba# of the software iSCSI adapter. You can view this from the Configuration tab > Storage Adapters by viewing the iSCSI Software Adapter. You’ll also need to note the VMkernel NIC names of each iSCSI port group. You can view these from the Configuration tab > Networking by looking at the iSCSI port group. It will show the iSCSI port group name, the vmk#, IP address, and VLAN if you have one configured. Then from a CLI, either via the console or SSH, execute the following command for each iSCSI VMkernel NIC:
Example: esxcli swiscsi nic add -n vmk# -d vmhba#
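With two VMkernel NICs (say vmk1 and vmk2) and the software initiator at vmhba33 — your numbers will differ — the binding and a quick verification look like this:

```
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33
```

The list command should show both VMkernel NICs bound to the adapter. Then set the unused uplink back to active and rescan the adapter.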
If you want to enable jumbo frames on an iSCSI port group in ESXi 4.1, you’ll need to make configuration changes at both the vSwitch and VMkernel NIC level. Through trial and error, I found that I had to create the iSCSI port group from the command line instead of just enabling jumbo frames on an already existing port group. At first, I already had an iSCSI port group created via the vSphere Client, but enabling jumbo frames on it from the CLI didn’t work. I had to delete the iSCSI port group first, then recreate it from the CLI. Note that these commands are case sensitive, including the names of vSwitches, port groups, and VMkernel NICs. These commands were run via an SSH session directly to an ESXi 4.1 host.
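Here’s roughly the sequence that worked for me. The vSwitch name, port group name, and IP details are from my lab, so adjust to taste:

```
# Set the vSwitch MTU to 9000
esxcfg-vswitch -m 9000 vSwitch1
# Create the iSCSI port group on that vSwitch
esxcfg-vswitch -A iSCSI1 vSwitch1
# Create the VMkernel NIC on the port group with a 9000-byte MTU
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 -m 9000 iSCSI1
# Verify the MTU on the vSwitch and the VMkernel NIC
esxcfg-vswitch -l
esxcfg-vmknic -l
```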
So it’s been a long time since I’ve posted. We’ve finally finished our data center site surveys and we’re very close to starting the implementation phase. In preparation for implementation, we’ve begun testing configurations, playing with possibilities, and generally, seeing what the given hardware can do. We don’t exactly know what the architecture design team will give us to implement, but our pre-work will let us get a feel for the kinds of things we’ll be doing. For instance, we know we’ll be using virtual port channels and fabric extenders, so we’ve configured these on several of our Nexus switches. We’ll probably blow away the configs when we officially start anyways, but again, this gives us a chance to get our hands on the equipment and practice some of the same configs we’ll be using later.
Our test design will have two 5020s with 40 10GbE SFP ports plus a 6-port 10GbE SFP daughter card, and two 2224 fabric extenders, FEXs, with 24 1GbE copper ports. It will not be a cross-connected FEX design because we will use servers with dual-homed 10GbE converged network adapters, CNAs, which we plan on cross connecting. Cisco does not support a design where the FEXs and the servers are both cross connected. Our test design looks like the diagram below. Note that the diagram shows the server connected to each FEX via copper ports. We’ll actually be connecting each server via CNA twinax cables to the 10 Gb ports on the 5020s.
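The FEX side of that design is just a fabric association on each 5020. This is a sketch; the fabric port range and FEX number are examples, and the cross-connected CNAs would land on separate vPC member port channels:

```
feature fex
interface port-channel100
  switchport mode fex-fabric
  fex associate 100
interface ethernet 1/17-18
  switchport mode fex-fabric
  fex associate 100
  channel-group 100
```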