I was recently given the privilege to review Packt Publishing’s recent book about vSphere design. I was immediately pleased to see that recent VCDX (graduate? achiever?) Hersey Cartwright of #vBrownbag fame was the sole author. I always appreciate knowing what I’m about to put in my brain came from a trustworthy source. I see in his author bio, though, early in the book, that he’s “only” recognized as a VCAP, not a VCDX (VCAPs are all-stars to begin with, don’t get me wrong). So he must have at least started working on this before he achieved rock-star status. I couldn’t help but think as I read on how much writing this book must have helped his VCDX attempt.
I’ve read a lot, I mean a lot, of VMware books and articles and blog posts – just about everything I can get my hands on – and I kept nodding along with everything in this book. There were many times Hersey would broach a subject and I’d immediately look for him to cover those oh-so-important caveats. Sure enough, he covered them. I was very happy to see that we were on the same page.
So with respect to design books, this is essentially the 3rd of its kind I’ve read. The first, of course, was the Sybex vSphere Design (both editions), then I was very pleased to read VMware Press’s Managing and Optimizing VMware vSphere Deployments, which, while not strictly design-focused, hit on many design features nonetheless. Each is excellent and I recommend them. What makes Hersey’s different is that it’s short and to the point (vSphere Design is over 500 pages). This book is under 250 pages but packs in the relevant information you need to be a good architect or designer. Most importantly, let me emphasize this:
Hersey doesn’t give you a fish in this book. He teaches you to fish.
What I mean by that is in each section, he’s not simply listing the answers you’re looking for to design a redundant virtual network or to build reliable storage – he couldn’t possibly. What I feel he does throughout is explain the concepts and then teach you to ask better questions that lead to a good design. That’s unlike anything I’ve read in any other VMware book. I don’t feel Hersey wastes a sentence. Another feature that sets this book apart from others I’ve read is that it discusses how to build the documentation that supports a vSphere design. It’s no coincidence that Hersey mentioned each type of document that is likely needed in a successful VCDX defense. Congratulations, Hersey – you’ve made a one-of-a-kind book. Thanks for sharing.
Thanks to Twitter and Patrick Kremer’s article, I caught the recent news that starting 10 March 2014, VMware will require VCPs to recertify every 2 years to keep their certification current. You can read VMware’s release here. I took the time to read the comment threads on Patrick’s and others’ blogs to get an idea of folks’ reactions. Since you asked, here are my thoughts on the subject.
I shouldn’t have to say that it’s obviously VMware’s prerogative to change or create new policies regarding their certification programs. This shouldn’t be a topic of conversation.
I was graciously given the opportunity to read and review vSphere High Performance Cookbook, written by Prasenjit Sarkar (@stretchcloud) and published by Packt Publishing, whose subtitle states it has Over 60 recipes to help you improve vSphere performance and solve problems before they arise. Gulping down its chapters was easy after seeing that Prasenjit’s recipes included fixes for such common, and some not so common, misconfigurations or lack thereof.
In response to Miguel’s post, here are my thoughts:
I’m sure at least one of the VMware dudes Miguel was talking to was once a Windows System Administrator. I’m also sure that that same VMware dude cringed at the thought of needlessly putting multiple services on a single VM. He probably thought that as long as the customer had the money for Windows Server licenses and the compute and disk resources, each service should obviously be separated onto its own server. Now, to take a step back, let us say that, yes, it certainly is possible to put all the services you mentioned – vCenter, SQL 2008, VUM, and maybe even SRM – on the same box, whether virtual or physical. But of course, whether this is possible is not the question. It’s whether it should or should not be done in the first place. I’m going to pull out the age-old consultant’s answer and say, “It depends.”
It depends on if the customer has the budget for more Windows or SQL licenses. Does the customer have the compute and disk resources for several more servers? Is there already an existing SQL box or cluster that could be used? Is a DBA on staff, or at least a competent Windows Server admin? Does the customer’s environment even need a full blown SQL installation or would SQL Express do fine?
Now I’m coming from a background of government contracting where money is usually thrown at such projects. Resources for such an implementation are given little thought because they’re going to be there no matter what. This question could impact SMBs more, but probably not large corporations.
I think there are certainly right and wrong ways to implement based on circumstances. On the one hand, if you have the licenses, compute, disk, and administrative resources, I say absolutely, put each service on its own separate box. In more constrained environments, you may need to double up two or more services.
That’s not the least of it. Recovering from a failed VM will cost you less in time, effort, and hopefully, money. With an “all your eggs in one basket” approach, if one VM goes down and is somehow unrecoverable, then you’ve lost a lot of data. Separating your services reduces the likelihood that any one VM failure/loss will result in multiple services lost.
So I was having a discussion with a few fellow VMware dudes, and we were discussing the vCenter installation methods. One train of thought is to install vCenter, VUM, SQL 2008, and SRM on 1 VM with 2 vCPUs, 4 GB of memory, and a 100 GB drive, then monitor for performance and adjust as required by analyzing the performance data. I have been doing installations this way lately without issue. I have also done installations on dedicated SQL boxes \ VMs. I have gotten good performance out of the environment with all services on a single VM. In larger environments of 20 or more hosts and 300+ VMs, I have used a dedicated SQL server. The SRM documentation recommends a separate server for the SRM installation, but I have not seen any issues with it on the same box, and there was not any performance degradation in an…
View original post 147 more words
There are several good points made by my new blogging buddy, Miguel. Number one, you don’t include features in your design for the sake of features. This may seem obvious, but perhaps not for a passionate (maybe overzealous!) VMware Architect: implementing features that on-site staff aren’t proficient with or can’t manage is not a benefit. As Miguel shares in this “palm-to-face” anecdote, such features in the hands of untrained staff can have the opposite of the effect they’re designed for. So take into account the staff’s abilities before including advanced features in your design. Number two, communication is key in any environment. That means communicating to the customer the gravity of the decisions they make regarding what’s included in the design, and certainly sharing planned maintenance times with all stakeholders. A communication strategy and change control process are key to making this work. Number three, as Miguel shared with me, if an admin is looking at his virtual infrastructure like a hog looks at a wristwatch, well, things are pretty bad. And finally, always remember: VMware’s easy.
I had a long-term project at a customer site where I was to analyze, design, and architect a solution based on the equipment, environment, and requirements. Before I rolled in to the customer site as the new VMware SME, there had been a recommendation by a junior and recent VCP to implement distributed switching, linked vCenters, and a few other feature sets of VMware and NetApp. The on-site staff had no experience with distributed switching and their exposure to VMware was minimal, although many thought of themselves as experts after a few weeks with the product. I kept hearing the comment that VMware was easy. I recommended a hybrid solution with the MC using standard switching, and VM network\storage on distributed switching as a compromise to a fully distributed solution. They decided against this even after I presented them with the advantages.
A few weeks later they had…
View original post 378 more words
So far, our Physical-to-Virtual migrations of Exchange 2003 on x86 Server 2003 Enterprise boxes have gone mostly smoothly – until this evening, that is. In the past, a failure soon after the P2V process started was resolved with a reboot or by disabling the TCP Offload Engine on the Broadcom NICs (easily accomplished from cmd.exe with netsh int ip set chimney DISABLED).
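For reference, the quick fix described above boils down to a couple of commands run from an elevated cmd.exe on the Server 2003 source box. The registry line is an assumed alternative route to the same setting (the Scalable Networking Pack’s EnableTCPChimney value under the Tcpip parameters key); unlike the netsh command, it may require a reboot to take effect:

```shell
REM Disable TCP Chimney Offload (TOE) on Windows Server 2003 before retrying the P2V
netsh int ip set chimney DISABLED

REM Assumed alternative: set the equivalent Scalable Networking Pack registry value
REM directly (a reboot may be required for this route to take effect)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v EnableTCPChimney /t REG_DWORD /d 0 /f
```

After the migration completes, the offload setting can be re-enabled on the source machine if it stays in service.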
This evening’s P2Vs were a bit more challenging.