I was recently designing a vSphere Replication and SRM solution for a client and stated we would use static routes on the ESXi hosts. When asked why, I was able to (1) discuss why the default gateway on the management network wouldn’t work and (2) present some options for separating the vSphere Replication traffic in a way that would allow flexibility in throttling its bandwidth usage.
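To make that concrete, here’s a hedged sketch of the kind of static route involved, using esxcli (the network and gateway addresses below are hypothetical examples, not the client’s actual addressing):

```shell
# Sketch: steer traffic destined for the remote site's replication
# network through the gateway on the local replication vmkernel
# network, rather than the management network's default gateway.
# 192.168.50.0/24 (remote replication net) and 192.168.10.1
# (local replication gateway) are made-up example values.
esxcli network ip route ipv4 add --network 192.168.50.0/24 --gateway 192.168.10.1

# Confirm the route was added
esxcli network ip route ipv4 list
```

Because the route is tied to the replication vmkernel network, that traffic stays off the management path and can be shaped independently.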
You won’t see Network I/O Control listed here because this particular client didn’t have Enterprise Plus licensing and therefore wasn’t using a vDS. In addition, this client was running a Fibre Channel SAN on Cisco UCS with only a single VIC in each blade. That configuration doesn’t work well with NIOC, because NIOC doesn’t take into account the FC traffic that shares bandwidth with all the Ethernet traffic NIOC *is* managing.
I’m often asked by my clients about the best way to configure NetApp igroups when connecting to VMware VMFS LUNs, especially after I deploy a new system for them and am training them on its use. I appreciate the question because it means someone’s actually thinking through why something is configured the way it is rather than just throwing something together.
Here’s what I see a lot of out in the field: single igroups created with multiple initiators from multiple hosts. Functionally, this configuration will work – each host will be able to see each LUN, all things being equal. The problem arises when you want to either (1) remove a host from the igroup or (2) stop presenting a LUN to a subset of the hosts, as I’ll show you.
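The usual alternative is one igroup per host, so membership changes touch only that host. A hedged sketch in Data ONTAP 7-Mode syntax (igroup names, volume paths, and WWPNs below are all hypothetical examples):

```shell
# Sketch: one FCP igroup per ESXi host instead of a single shared
# igroup. WWPNs and names are made-up examples.
igroup create -f -t vmware esx01_fcp 50:01:43:80:11:22:33:44 50:01:43:80:11:22:33:45
igroup create -f -t vmware esx02_fcp 50:01:43:80:55:66:77:88 50:01:43:80:55:66:77:89

# Map the LUN to each host's igroup, keeping the LUN ID consistent
lun map /vol/vmfs_vol/vmfs_lun1 esx01_fcp 10
lun map /vol/vmfs_vol/vmfs_lun1 esx02_fcp 10

# Removing a host's access now affects only that host
lun unmap /vol/vmfs_vol/vmfs_lun1 esx02_fcp
```

With a single shared igroup, by contrast, removing an initiator or unmapping a LUN changes presentation for every host in the group at once.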
I had the opportunity recently to deploy 18 B420 M3 blades across two sites. Having deployed only half-width blades over the last two years, I had to change my usual Service Profile configuration for ESXi hosts to ensure the vNICs and vHBAs were properly spread across the two VICs installed in each blade. Each B420 had a VIC 1240 and a VIC 1280. The Service Profile for the blades includes six vNICs and two vHBAs; the six vNICs were used to take advantage of QoS policies at the UCS level. The six vNICs were:
- vNIC FabA-mgmt
- vNIC FabA-vMotion
- vNIC FabA-VM-Traffic
- vNIC FabB-mgmt
- vNIC FabB-vMotion
- vNIC FabB-VM-Traffic
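On the ESXi side, each FabA/FabB pair of the same traffic type would typically become the active/standby uplinks for that traffic’s vSwitch or port group. A hedged sketch with esxcli, assuming (hypothetically – actual vmnic numbering depends on placement and PCI enumeration) the management pair enumerates as vmnic0 and vmnic3:

```shell
# Sketch: pair the Fabric A and Fabric B management vNICs as
# active/standby uplinks on a standard vSwitch. vSwitch and vmnic
# names are hypothetical examples.
esxcli network vswitch standard add --vswitch-name vSwitch0
esxcli network vswitch standard uplink add --vswitch-name vSwitch0 --uplink-name vmnic0
esxcli network vswitch standard uplink add --vswitch-name vSwitch0 --uplink-name vmnic3
esxcli network vswitch standard policy failover set --vswitch-name vSwitch0 \
    --active-uplinks vmnic0 --standby-uplinks vmnic3
```

The same pattern would repeat for the vMotion and VM-Traffic pairs, which is what lets the UCS-level QoS policies apply per traffic type.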
Packt Publishing is celebrating 10 years of publishing its technical tomes and is inviting everyone to celebrate with them. While this post is coming out at the tail end of the promotion, you still have time to get in on the action – it runs until July 5th.
You can buy as many books as you like for $10 each. Check out their deals here:
I was recently given the privilege to review Packt Publishing’s recent book about vSphere design. I was immediately pleased to see that recent VCDX (graduate? achiever?) Hersey Cartwright of #vBrownbag fame was the sole author. I always appreciate knowing that what I’m about to put in my brain came from a trustworthy source. In his author bio early in the book, though, he’s “only” recognized as a VCAP, not a VCDX (VCAPs are all-stars to begin with, don’t get me wrong), so he must have at least started working on this before he achieved rock-star status. I couldn’t help but think as I read on how much writing this book must have helped his VCDX attempt.
I’ve read a lot, I mean a lot, of VMware books, articles, and blog posts – just about everything I can get my hands on – and I kept nodding along with everything in this book. There were many times Hersey would broach a subject and I’d immediately look for him to cover those oh-so-important caveats. Sure enough, he covered them. I was very happy to see that we were on the same page.
With respect to design books, this is essentially the third of its kind I’ve read. The first, of course, was Sybex’s vSphere Design (both editions); then I was very pleased to read VMware Press’s Managing and Optimizing VMware vSphere Deployments, which, while not strictly design-focused, hits on many design topics nonetheless. Each is excellent and I recommend them. What makes Hersey’s different is that it’s short and to the point (vSphere Design is over 500 pages). This book is under 250 pages but packs in the relevant information you need to be a good architect or designer. Most importantly, let me emphasize this:
Hersey doesn’t give you a fish in this book. He teaches you to fish.
What I mean by that is that in each section he isn’t simply listing the answers you’re looking for to design a redundant virtual network or to build reliable storage – he couldn’t possibly. What he does throughout is explain the concepts and then teach you to ask better questions, questions that lead to a good design. That’s unlike anything I’ve read in any other VMware book, and I don’t feel Hersey wastes a sentence. An additional feature that sets this book apart from others I’ve read is that it discusses how to build the documentation that supports a vSphere design. It’s no coincidence that Hersey covers each type of document likely needed in a successful VCDX defense. Congratulations, Hersey – you’ve made a one-of-a-kind book. Thanks for sharing.