By request, this post reproduces an internal operational document I handed over for the environment in which I installed this infrastructure.
The Dell management infrastructure consists of two dedicated VMs, in addition to relying on vCenter and a database. The two dedicated VMs, a Windows Server 2008 R2 VM and a Linux virtual appliance, run three separate Dell applications between them: Dell OpenManage Essentials (OME), Dell Repository Manager (RM), and the Dell Management Plug-in (DMP) virtual appliance itself. An overview of the infrastructure is below.
Even though we’re using the Dell Management Plug-in for vCenter, we use Dell’s OpenManage Essentials (OME) for physical boxes, and I want to be able to view ESXi server info from OME as well. After initial configuration, my ESXi servers were showing up as “Unknown” even though they were correctly categorized as “VMware ESX Servers” in OME. This irritated me because I had finally configured my physical servers to show nice green check marks indicating all was well, but I couldn’t get my ESXi boxes to play nicely. While the ESXi boxes sat as Unknown, they also had no detailed hardware inventories available.
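One thing worth verifying when ESXi hosts show up as Unknown is the SNMP agent on the hosts themselves, since OME relies on SNMP to classify and inventory them. The sketch below is a common first check, not necessarily the fix for this particular issue; the community string and OME hostname are placeholders, and the syntax shown is the ESXi 5.x `esxcli` form (ESXi 4.x uses `vicfg-snmp` instead).

```shell
# Sketch only: enable the ESXi SNMP agent so OME can discover the host.
# "public" and "ome-server.company.net" are placeholders for your own
# community string and OME server.
esxcli system snmp set --communities public
esxcli system snmp set --targets ome-server.company.net@162/public
esxcli system snmp set --enable true

# Confirm the agent is enabled and configured as expected.
esxcli system snmp get
```

After enabling the agent, re-run discovery in OME against the host and check whether its classification changes.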
Now, I’ve been unimpressed by Dell’s hardware management platform so far, even though the idea of it promises to be a big time saver and monitoring tool. Once I have it configured, not only will I get SNMP hardware monitoring, but I should be able to upgrade firmware and BIOS versions remotely. Mostly, I’ve been dissatisfied by the lack of clear and organized configuration steps for what I consider to be a pretty standard data center: mostly virtualized, with a few physical servers scattered about. As a note, I haven’t yet configured Dell’s management and monitoring stack to keep track of our remote office hardware, but it’s on the list of things to do.
Real quick, let me show you what I’ve come up with.
A tricky configuration piece of the Dell Management Plug-in that I discovered the hard way is that you must log in to vCenter via the vSphere Client with the same name or IP address you used to register vCenter with the Dell Management Plug-in virtual appliance. And I mean *exactly* the same, FQDN and all. You can’t register vCenter in the Dell virtual appliance with an IP address and then turn around and log in to the vSphere Client with your usual server name or, in my case, a DNS alias or CNAME.
As you can see above, I’ve created a CNAME record for the first of my vCenters in a Linked Mode group. I’ve named it vCenter, and this is how I log in to the vSphere Client: by just typing vCenter in the Name/IP address field. When I first registered the Dell plug-in via the virtual appliance, however, I registered the first vCenter server by its FQDN; let’s call it myvcenter.company.net. If you then go to the Dell plug-in in the vSphere Client, you get a nice error stating that the Dell Management Plug-in cannot access vCenter. Showing details gives you nothing, but don’t despair quite yet.
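A quick sanity check from any workstation is to confirm what each name resolves to, which makes it easier to see that the alias and the registered FQDN are two different identities even when they point at the same box. The hostnames below are placeholders matching the example above; substitute your own.

```shell
# The CNAME/alias used for vSphere Client logins...
nslookup vcenter

# ...and the FQDN registered with the Dell virtual appliance.
nslookup myvcenter.company.net

# Both may resolve to the same address, but the plug-in cares about the
# name string you typed at login matching the registered name, not the
# resolved IP, so logging in via the alias still fails.
```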
I’ve had occasion recently to implement many vSphere 4.1 environments for a customer. There’s a lot to learn during these deployments, and many worthy blog posts are just waiting to be written. One especially comes to mind, mainly because it coincided with a recent query I had regarding a BIOS setting for a Dell PowerEdge R710. The exact query doesn’t matter; what’s important is that, in search of an answer, I ran across Marek.Z’s blog, Default Reasoning, and this post. His post on vSphere 4.x BIOS settings and best practices inspired me to write a BIOS best practices post for vSphere 5. It will be very similar to vSphere 4.x, but you’ll notice I’ve included explanations of why these settings are best suited to vSphere 5 environments. Some of these settings may be obvious, while others, like NUMA, C1E, and Memory, may not be. Especially for these, I’ve included the results of my research.