Configuring Cisco Nexus 5020 and 2224 Fabric Extenders for Virtual Port Channeling (vPC)

So it’s been a long time since I’ve posted. We’ve finally finished our data center site surveys and we’re very close to starting the implementation phase. In preparation for implementation, we’ve begun testing configurations, playing with possibilities, and generally, seeing what the given hardware can do. We don’t exactly know what the architecture design team will give us to implement, but our pre-work will let us get a feel for the kinds of things we’ll be doing. For instance, we know we’ll be using virtual port channels and fabric extenders, so we’ve configured these on several of our Nexus switches. We’ll probably blow away the configs when we officially start anyways, but again, this gives us a chance to get our hands on the equipment and practice some of the same configs we’ll be using later.

The Design

Our test design will have two 5020s, each with forty 10GbE SFP+ ports plus a 6-port 10GbE SFP+ daughter card, and two 2224 fabric extenders (FEXs) with 24 1GbE copper ports each. It will not be a cross-connected FEX design because we will use servers with dual-homed 10GbE converged network adapters (CNAs), which we plan on cross connecting. Cisco does not support a design where both the FEXs and the servers are cross connected. Our test design looks like the diagram below. Note that the diagram shows the server connected to each FEX via copper ports. We’ll actually be connecting each server via CNA twinax cables to the 10 Gb ports on the 5020s.

vPC design

The dotted line around the Nexus devices represents the virtual port channel (vPC) domain. When configured, the four Nexus devices will exist in the same vPC domain. Devices can only exist in a single vPC domain at one time. A vPC domain is similar in principle to a VTP domain in that devices in the same domain can share status information and configurations. The links between the 5020 devices will be configured as a regular port channel and will act as a vPC peer-link. A vPC peer-link is used to share vPC status information between the 5020s. The connections between each 5020 and their respective FEXs will be configured as vPCs. Although not configured in this exercise, the servers will also connect to the 5020s using vPC links (not as shown in the figure).

Step 1: Configuring management interfaces and default gateways

The two 5020s will be labeled as 5020A and 5020B. The Nexus 5020 has a dedicated management interface which gets a management IP address. The default gateway will be configured for the management VRF. VRF stands for Virtual Routing and Forwarding. A VRF is a distinct routing instance with its own routing table and routing protocols. By default, the 5020 has two such VRFs, the management VRF and the default VRF. The management interface resides within the management VRF. Routes for the management interface must be configured under the management VRF; otherwise the switch will look for them in the default VRF and management traffic won’t get anywhere.

We configure the first switch:

5020A(config)# int mgmt 0

5020A(config-if)# ip address

5020A(config-if)# vrf context management

5020A(config-vrf)#ip route

And the second:

5020B(config)# int mgmt 0

5020B(config-if)# ip address

5020B(config-if)# vrf context management

5020B(config-vrf)#ip route

The vrf context management command drops you into the configuration context for the management VRF, which is where the mgmt0 interface lives. The ip route command then adds a default route to that VRF, telling the switch to send management traffic to the designated gateway whenever it has no more specific route to a destination.
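The actual addresses depend on your management network, so as an illustration only (192.0.2.0/24 below is a made-up example range, not from our environment), the full configuration on the first switch might look something like:

5020A(config)# int mgmt 0

5020A(config-if)# ip address 192.0.2.11/24

5020A(config-if)# exit

5020A(config)# vrf context management

5020A(config-vrf)# ip route 0.0.0.0/0 192.0.2.1

5020B would get its own address (say, 192.0.2.12/24) and the same default route.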

Step 2: Enable the vPC and LACP features

The Nexus product line runs NX-OS, an operating system designed from the ground up to be modular. Before you can use many features, you must first enable them. Two features we need in this exercise are vPC and LACP. Enabling them is easy. You’ll want to do this on both switches:

5020A(config)# feature vpc

5020A(config)# feature lacp

You can turn around and verify what features are enabled using the show feature command.
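If you only care about these two features, the output can be filtered:

5020A# show feature | include vpc

5020A# show feature | include lacp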

Step 3: Create a vPC domain

vPC domains are similar in principle to that of a VTP domain. Switches residing in the same VTP domain can share status and configuration information regarding VLANs. Similarly, Nexus switches must reside in the same vPC domain in order to share vPC information. Configure this on both switches. One command creates a vPC domain:

5020A(config)# vpc domain 5

Step 4: Configure switch priorities for vPC management

Similar to spanning tree root bridges, Nexus switches configured for vPC can be manually assigned the primary role in their vPC domain. If the peer link fails, the primary switch keeps all its vPC member ports active, while the secondary switch suspends its vPC member ports for as long as it still receives keep-alive heartbeats from the primary via the peer keep-alive link. Best practice is to make the spanning tree root bridge the primary vPC switch, as well. You can do this with the following command:

5020A(config-vpc-domain)# role priority 1000

The default priority is 32667. The lowest priority wins the election. The range of role priorities is 1 – 65636.
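Once the domain is up on both peers, you can confirm which switch won the election:

5020A# show vpc role

The output shows the local and peer role priorities and whether this switch is currently operating as primary or secondary.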

Step 5: Configure vPC peer keep-alive link

The vPC keep-alive link shares a heartbeat between the two 5020 switches in the vPC domain. It must be a routed link and Cisco suggests using the management interface. The management interfaces in our environment connect to a management network. Essentially, the connection looks like this:

vPC management network

The configuration of the first switch looks like this:

5020A(config-vpc-domain)# peer-keepalive destination


——–:: Management VRF will be used as the default VRF ::——–

And the second switch points to the first:

5020B(config-vpc-domain)# peer-keepalive destination


——–:: Management VRF will be used as the default VRF ::——–

As a note, many of these commands are entered in vPC domain configuration mode, indicated by the <switch_name>(config-vpc-domain)# prompt. To enter this mode, input vpc domain followed by your vPC domain number. In my case:

vpc domain 5
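Putting it together with made-up management addresses (192.0.2.11 and 192.0.2.12 here are examples only; substitute your own), the keep-alive configuration on each switch would look something like:

5020A(config)# vpc domain 5

5020A(config-vpc-domain)# peer-keepalive destination 192.0.2.12 source 192.0.2.11

5020B(config)# vpc domain 5

5020B(config-vpc-domain)# peer-keepalive destination 192.0.2.11 source 192.0.2.12

You can verify the heartbeat afterwards with show vpc peer-keepalive.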

Step 6: Configure vPC peer link

The vPC peer link allows traffic necessary for vPC functions to flow between the vPC peers. This is normally the same trunk link used between switches in the first place. For the Nexus 5020s, you must use 10 Gbps links configured as a port-channel. You can use more than two links, but in my case I’m using two. From start to finish, the configuration looks like this:

5020A(config)# int eth 1/37 - 38

5020A(config-if-range)# channel-group 10 mode active


5020A(config)# int po10

5020A(config-if)# switchport mode trunk

5020A(config-if)# vpc peer-link

Configure this on both switches.
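With the peer link up on both sides, a couple of show commands confirm the state:

5020A# show port-channel summary

5020A# show vpc

The show vpc output should report that the peer adjacency has formed and list po10 as the peer link.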

Step 7: Configure Nexus 2224 Fabric Extenders and fabric interface

Now we can connect the fabric extenders. There’s a short process to this but once you’ve done it a few times it will make sense. I’ll show you the commands then explain them one by one.

5020A(config)#feature fex

5020A(config)#fex 100

5020A(config-fex)#pinning max-links 1

The switch responds to the pinning command with a warning: “Change in Max-links will cause traffic disruption.”

5020A(config-fex)# int eth 1/20 – 21

5020A(config-if-range)# channel-group 50

5020A(config-if-range)# int po50

5020A(config-if)# switchport mode fex-fabric

5020A(config-if)# fex associate 100

The feature fex command enables the fabric extender feature. fex 100 tells the 5020 that you will be connecting a FEX to it with the label 100. 100 will also be the FEX chassis number when viewing all the ports on the 5020. For instance, after you’ve finished connecting the FEX, a show int status command will show the native 5020 ports (eth1/10, eth1/11, and so on) for the main chassis, the daughter card ports (eth3/1, eth3/2), and the FEX ports as eth100/1/10, eth100/1/11, etc., since you’ve specified fex 100 as the label.

We used pinning max-links 1 here because we plan on port-channeling the links between the FEX and 5020. For a better description of this command and of the Nexus, see Network Warrior 2nd Edition, by Gary Donahue. Then you can create the port-channel uplink from the FEX to the 5020. The switchport mode fex-fabric is similar to the other switchport mode commands – telling the 5020 to expect fabric traffic on these particular ports. Finally, we associate our FEX, labeled as 100, to the port-channel with the fex associate 100 command.

The commands above configure the 5020 ports to accept a FEX. With the FEX online, we can configure its host ports like any other ports, for example to bundle them into a port-channel that participates in a vPC:

5020A(config)# int eth 100/1/20 - 21

5020A(config-if-range)# channel-group 20

5020A(config-if-range)# int po20

5020A(config-if)# vpc 20

You can include a switchport access vlan command on the port-channel to restrict traffic to a particular VLAN, if you wish.
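A few commands are handy for verifying the finished FEX setup:

5020A# show fex

5020A# show vpc

show fex lists each attached fabric extender and whether it is online, and show vpc summarizes the peer link along with each configured vPC and its status.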

Good references for configuring vPCs, FEXs, and the Nexus product line include Network Warrior, 2nd Edition, by Gary Donahue, and Cisco’s online documentation.


