Real World Cisco UCS Adapter Placement

I had the opportunity recently to deploy 18 B420 M3 blades across two sites.  Having only deployed half-width blades over the last two years, I had to change my usual Service Profile configuration for ESXi hosts to ensure the vNICs and vHBAs were properly spread across the two VICs installed in each blade.  Each B420 had a VIC 1240 and a VIC 1280.  The Service Profile for the blades includes six vNICs and two vHBAs.  Six vNICs were used so that each traffic type could take advantage of its own QoS policy at the UCS level.  The six vNICs configured included:

  • vNIC FabA-mgmt
  • vNIC FabA-vMotion
  • vNIC FabA-VM-Traffic
  • vNIC FabB-mgmt
  • vNIC FabB-vMotion
  • vNIC FabB-VM-Traffic


So the goal of the Adapter Placement Policy, in this case, would be to ensure that if a VIC failed, it would not cause a total traffic disruption for any particular type of traffic.  For instance, if the VIC 1280 failed, I would not want both VM Traffic vNICs to be mapped onto the 1280 and therefore cause all VM traffic to cease because of the failure of one piece of hardware.  Instead, I need to make sure that only one vNIC of a particular traffic type is mapped to any single VIC.
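That goal can be sketched as a quick check.  This is just an illustrative model, not anything UCSM provides: each vNIC is represented as a (traffic type, adapter) pair, and we flag any traffic type whose vNICs all land on one physical adapter.

```python
# Sketch of the redundancy goal: no traffic type should have all of its
# vNICs mapped to a single physical adapter. The data structures here are
# illustrative assumptions, not part of UCSM.
from collections import defaultdict

def single_points_of_failure(placement):
    """placement: list of (traffic_type, adapter) tuples, one per vNIC.
    Returns the traffic types whose vNICs all share one adapter."""
    adapters_by_type = defaultdict(set)
    for traffic_type, adapter in placement:
        adapters_by_type[traffic_type].add(adapter)
    return sorted(t for t, adapters in adapters_by_type.items()
                  if len(adapters) == 1)

# The default placement I describe next: both mgmt vNICs on Adapter 1,
# both vMotion vNICs on Adapter 3, VM Traffic spread by luck.
bad = [("mgmt", 1), ("mgmt", 1),
       ("vMotion", 3), ("vMotion", 3),
       ("VM-Traffic", 1), ("VM-Traffic", 3)]
print(single_points_of_failure(bad))  # -> ['mgmt', 'vMotion']
```

With that default placement, losing one VIC would take out all management or all vMotion traffic, which is exactly the situation the placement policy needs to prevent.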

The Problem

Before I created an Adapter Placement Policy, the default placement put both ESXi management vNICs on Adapter 1 (the VIC 1240) and both vMotion vNICs on Adapter 3 (the VIC 1280).  This is exactly what I’m trying to avoid.  By chance, the VM Traffic vNICs were spread across both adapters “correctly,” but the same bad luck could have hit any of the vNIC pairs.

The Fabric A ESXi management vNIC is on Adapter 1.


And the Fabric B ESXi management vNIC is also on Adapter 1.


The Fix

So how do we fix this?  The same way that we would with a physical server.  Ensure traffic types are spread across different PCIe slots or channels.  With the UCS, we use Adapter Placement Policies to accomplish this.

Create vNIC/vHBA Placement Policies

First I created a vNIC/vHBA Placement Policy.


The Virtual Slots will align or map to where your physical adapters are installed.  The VIC 1240 is always installed in Adapter Slot 1.  Check out the Configuring Server-Related Policies section of your relevant UCS version configuration guide for details on this.


The B420 M3 has three adapter slots, though.  While the 1240 can only be installed in slot 1, the 1280 can be installed in slot 2 or slot 3, as shown below.  The diagram below comes from the B420 M3 Installation and Service Note.


In my case, the 1280 is installed in slot 3 and therefore, is shown as Adapter 3 in the Equipment tab.

The Selection Preference identifies which vNICs or vHBAs will be mapped onto which adapter.  In this case, I chose Assigned Only because I want the admins to think about where their vNICs are being placed when they create them, rather than letting them land on any adapter and possibly add a single point of failure for a particular traffic type.  Assigned Only means “only map vNICs/vHBAs to these Virtual Slots if they’re explicitly or statically assigned.”

I found that choosing Round Robin or Linear didn’t actually make any difference in how the vCons were mapped to the physical adapters.  According to the vCon to Adapter Placement for All Other Supported Servers section of the config guide, I should have seen this mapping change; however, I didn’t.  You can use the “show vcon-mapping” command to see this.


Fortunately, it doesn’t really matter much in my case, so I just let it be.  You just need to know what this mapping is between vCons and Adapters.  You can use the command above or simply look at the server’s adapters on the Equipment tab as shown below.  I see Adapter 1, which is the VIC 1240, and Adapter 3, which is the 1280.


So in total, we have this type of mapping between the hardware and UCSM configurations for my particular case:

  • Virtual Slot 1 –> vCon1 –> Adapter 1 –> VIC 1240
  • Virtual Slot 2 –> vCon2 –> Unused
  • Virtual Slot 3 –> vCon3 –> Adapter 3 –> VIC 1280
  • Virtual Slot 4 –> vCon4 –> Unused
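That mapping can be written down as a small lookup table.  Again, this is just a sketch documenting my blade’s hardware (VIC 1240 in slot 1, VIC 1280 in slot 3); the dictionary and function names are mine, not UCSM’s.

```python
# Hypothetical lookup of the vCon -> physical adapter mapping for this
# B420 M3. vCons 2 and 4 have no adapter behind them on this blade.
VCON_TO_ADAPTER = {
    1: "VIC 1240 (Adapter 1)",
    2: None,   # unused
    3: "VIC 1280 (Adapter 3)",
    4: None,   # unused
}

def adapter_for_vcon(vcon):
    """Return the physical adapter backing a vCon, or raise if unused."""
    adapter = VCON_TO_ADAPTER.get(vcon)
    if adapter is None:
        raise ValueError(f"vCon {vcon} is not backed by a physical adapter")
    return adapter

print(adapter_for_vcon(1))  # -> VIC 1240 (Adapter 1)
```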

With this information in hand:

  1. What vNICs and vHBAs I have configured
  2. What adapters I have and how they’re mapped from hardware into UCSM

I was able to draw out what I was trying to do.  While the 1240 and 1280 each still have connections to both fabrics, in order to provide hardware redundancy across the VICs, I chose to map the Fabric A vNICs and vHBAs to the 1240 and the Fabric B vNICs and vHBAs to the 1280.  This is just one way to do it, but it’s clear, it’s consistent, and it meets my objectives – so I’m good with that.
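The fabric-to-VIC rule is simple enough to express as a tiny placement function.  This is a sketch of my intent, not a UCSM API: the vNIC names follow my FabA-*/FabB-* convention, and the vHBA names fc0/fc1 are assumed for illustration (the post doesn’t show the actual vHBA names).

```python
# Sketch of my placement rule: Fabric A vNICs/vHBAs go to vCon 1
# (VIC 1240), Fabric B to vCon 3 (VIC 1280). Names starting with
# "fc0"/"fc1" are hypothetical vHBA names used only for this example.
def vcon_for(name):
    if name.startswith(("FabA", "fc0")):   # Fabric A -> VIC 1240
        return 1
    if name.startswith(("FabB", "fc1")):   # Fabric B -> VIC 1280
        return 3
    raise ValueError(f"unrecognized vNIC/vHBA name: {name}")

vnics_and_vhbas = ["FabA-mgmt", "FabA-vMotion", "FabA-VM-Traffic",
                   "FabB-mgmt", "FabB-vMotion", "FabB-VM-Traffic",
                   "fc0", "fc1"]
placement = {name: vcon_for(name) for name in vnics_and_vhbas}
print(placement)
```

Because each traffic type has one vNIC per fabric, this rule guarantees that every traffic type ends up with one vNIC on each physical VIC, which is the redundancy goal from the start of the post.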


Modify vNIC/vHBA Placement

Now we assign the Placement Policy to the Service Profile and configure the actual placements.  To do this, we select the Service Profile and go to the Network Tab > Modify vNIC/vHBA Placement.


Select the new Placement Policy at the top and start configuring the vNIC/vHBA mappings.  Again, I worked out above where each vNIC and vHBA would be mapped so this is simply putting it in the configuration.


The full vCon 1 configuration is shown here:


And the full vCon 3 configuration is shown here:


A reset of the Service Profile re-mapped the vNICs and vHBAs.

Before the change, we can see the mapping wasn’t useful at all.


And after the change, it’s nice and orderly and spread across the hardware.


Or in the GUI, we can see the effects of the placement policies by viewing the Desired and Actual Placements.  The screenshots below are from different Service Profiles because I didn’t keep track of which ones I was taking screenshots of.  The first screenshot shows the vNIC placement before correction.  Again, the ESXi management vNICs are placed on Adapter 1 and the vMotion vNICs are placed on Adapter 3.


And after creating placement policies, the Fabric A vNICs are mapped to Adapter 1 and Fabric B vNICs are mapped to Adapter 3.  While I don’t show it, the vHBAs are mapped correctly, too.  You’ll just have to take my word for it. 😉



Comments

  1. Very curious, B420’s as hypervisors, what workload will they be running and at what configuration?

  2. Hi Tommy,

    So this deployment was for a medical SaaS provider and included a DR site which was essentially half the size of their production site. They’re using UCS Director for configuration of the hardware stack (EMC VNX/VPLEX/RecoverPoint, Nexus 5ks, UCS) and portal access for deploying VMs on top of vSphere for developers.

    I’m unsure of their exact workloads – we worked with a different partner who sold the hardware while we put it together for them. The 420s have 4 procs and 512GB of RAM, though.

    – Mike

  3. Marko says:

    Hi Mike,

    Great stuff, cleared lot of doubts.
    Just wondering if there was any way to test the failure of vNICs in this setup?
    For sure you can physically pull one of the adapters or create a service profile which will use only one vCon, but as far as I know you cannot really disable a VIC in UCSM while the blade is running?



  4. Liam says:

    Hi Mike,

    What we have found in this config is that if you add some new vNICs at a later date and you are using ESXi, the way the vNICs get re-discovered can mess up the vNIC order in ESXi if you rebuild a host. Have you tried that at all?
