Good day my friends! Good day it is, indeed! What makes it a good day, you ask? Well, for one, I’m being audited for the first time in my life. No, this isn’t an IRS audit (although I’m sure that would be more pleasant for me). This is an audit of my virtual infrastructure. I assume people, for some odd reason, like to know their money is entrusted to competent folks (see, I work for a bank) that will safeguard it from the evils of the Internets, like this guy.
In response to Miguel’s post, here are my thoughts:
I’m sure at least one of the VMware dudes Miguel was talking to was once a Windows System Administrator. I’m also sure that same VMware dude cringed at the thought of needlessly putting multiple services on a single VM. He probably thought that as long as the customer had enough money for Windows Server licenses, compute, and disk resources, one should obviously separate each service onto its own server. Now, to take a step back, let us say that, yes, it certainly is possible to put all the services you mentioned (vCenter, SQL 2008, VUM, and maybe even SRM) on the same box, whether virtual or physical. But of course, whether this is possible is not in question. It’s whether it should or should not be done in the first place. I’m going to pull out the age-old consultant’s answer and say, “It depends.”
It depends on whether the customer has the budget for more Windows or SQL licenses. Does the customer have the compute and disk resources for several more servers? Is there already an existing SQL box or cluster that could be used? Is a DBA on staff, or at least a competent Windows Server admin? Does the customer’s environment even need a full-blown SQL installation, or would SQL Express do fine?
Now, I’m coming from a background of government contracting, where money is usually thrown at such projects. Resources for such an implementation are given little thought because they’re going to be there no matter what. This question could impact SMBs more, but probably not large corporations.
I think there are certainly right and wrong ways to implement based on circumstances. On the one hand, if you have the licenses, compute, disk, and administrative resources, I say absolutely, put each service on its own separate box. In more constrained environments, you may need to double up two or more services.
That’s not the least of it. Recovering from a failed VM will cost you less in time, effort, and, hopefully, money. With an “all your eggs in one basket” approach, if one VM goes down and is somehow unrecoverable, then you’ve lost a lot of data. Separating your services reduces the likelihood that any one VM failure/loss will result in multiple services lost.
So I was having a discussion with a few fellow VMware dudes, and we were discussing vCenter installation methods. One train of thought is to install vCenter, VUM, SQL 2008, and SRM on one VM with 2 vCPUs, 4 GB of memory, and a 100 GB drive, then monitor for performance and adjust as required by analyzing the performance data. I have been doing installations this way lately without issue. I have also done installations on dedicated SQL boxes/VMs. I have gotten good performance out of the environment with all services on a single VM. In larger environments of 20 or more hosts and 300+ VMs, I have used a dedicated SQL server. The SRM documentation recommends a separate server for the SRM installation, but I have not seen any issues with it on the same box, and there was not any performance degradation in an…
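The rule of thumb in that quoted post (share one VM in small shops; move to a dedicated SQL server at roughly 20+ hosts or 300+ VMs) can be sketched as a tiny helper. This is purely illustrative and not official VMware guidance — the function name and the default thresholds are my own, taken from the anecdote above, and real sizing should still be driven by monitoring performance data.

```python
# Illustrative sketch only: apply the post's rough rule of thumb for where
# to put the vCenter database. Thresholds are assumptions from the anecdote.
def recommend_sql_placement(num_hosts: int, num_vms: int,
                            host_threshold: int = 20,
                            vm_threshold: int = 300) -> str:
    """Suggest a placement for the vCenter database."""
    if num_hosts >= host_threshold or num_vms >= vm_threshold:
        return "dedicated SQL server"
    return "co-located on the vCenter VM (monitor and adjust)"

print(recommend_sql_placement(5, 80))    # small shop
print(recommend_sql_placement(25, 400))  # larger environment
```

Either answer still comes with the same caveat from the post: monitor, analyze, and adjust as the environment grows.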
There are several good points made by my new blogging buddy, Miguel. Number one, you don’t include features in your design for the sake of features. This may seem obvious, but perhaps not for a passionate (maybe overzealous!) VMware Architect: implementing features that on-site staff aren’t proficient with or can’t manage is not a benefit. As Miguel shares in this “palm-to-face” anecdote, such features in the hands of untrained staff can have the opposite of the effect for which they’re designed. So take into account the staff’s abilities before including advanced features in your design. Number two, communication is key in any environment. That means communicating to the customer the gravity of the decisions they make regarding what’s included in the design, and certainly sharing planned maintenance times with all stakeholders. A communication strategy and a change control process are key to making this work. Number three, as Miguel shared with me, if an admin is looking at his virtual infrastructure like a hog looks at a wristwatch, well, things are pretty bad. And finally, always remember: VMware’s easy.
I had a long-term project at a customer site where I was to analyze, design, and architect a solution based on the equipment, environment, and requirements. Before I rolled in to the customer site as the new VMware SME, there had been a recommendation by a junior and recently minted VCP to implement distributed switching, linked vCenters, and a few other feature sets of VMware and NetApp. The on-site staff had no experience with distributed switching, and their exposure to VMware was minimal, although many thought of themselves as experts after a few weeks with the product. I kept hearing the comment that VMware was easy. I recommended a hybrid solution, with the MC using standard switching and VM network/storage on distributed switching, as a compromise to a fully distributed solution. They decided against this even after I presented them with the advantages.
A few weeks later they had…
Before we ever start installing hardware or configuring software, we’re going to be conducting site surveys. Although preliminary surveys have already been done, we’ll be going a bit more in-depth to determine any remaining requirements for enterprise virtualization that are still unmet. Initially, my thoughts were centered on the important things, like existing server, storage, and networking assets. But a CISSP on our team created a document that goes much deeper and would catch more deficiencies that could impact a successful data center deployment. I’ll highlight the less-thought-about issues that could hamper us if not properly accounted for:
Power
- Distribution points and availability (how much downtime per day/week/month?)
- How often is main-line power interrupted? Are there brown-outs or line spikes?
- Load factors for UPS units and current load (can the current UPS handle the hardware?)
- Rack-mounted or facility UPS, and UPS run-time under load
- UPS battery life-cycle and maintenance
- Back-up generators
- Rack power distribution unit types and available receptacles, with at least two per rack, each connected to a different UPS/distribution point/circuit
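For the UPS items above, two quick back-of-the-envelope checks are useful during the walkthrough. This is a deliberately simplified sketch of my own — it assumes a fixed power factor for nameplate VA and linear battery runtime scaling, neither of which holds exactly for real batteries, so treat the numbers as ballpark only.

```python
# Simplified UPS sanity checks for a site survey. Assumptions: nameplate VA
# derated by a fixed power factor; runtime scales linearly with load (real
# battery discharge curves are non-linear, so this is a rough estimate).
def ups_load_factor(load_watts: float, ups_va: float,
                    power_factor: float = 0.9) -> float:
    """Fraction of usable UPS capacity consumed by the racked equipment."""
    return load_watts / (ups_va * power_factor)

def estimated_runtime_min(runtime_at_full_load_min: float,
                          load_factor: float) -> float:
    """Very rough runtime estimate at a partial load."""
    return runtime_at_full_load_min / load_factor

lf = ups_load_factor(load_watts=3600, ups_va=5000)  # ~0.8
print(f"load factor: {lf:.0%}")
print(f"estimated runtime: {estimated_runtime_min(10, lf):.1f} min")
```

If the load factor comes out near or above 1.0, the current UPS can’t handle the hardware — exactly the deficiency the survey is meant to catch before installation day.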
HVAC and Facility Air Handling
- Capacity of chillers vs. heat output of equipment
- Is HVAC on UPS? Are there portable chillers in use?
- Rack flow capacity and rack row layout, including hot/cold rows
- Humidity
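The first item — chiller capacity vs. equipment heat output — is simple arithmetic using standard conversions (1 W ≈ 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr). A sketch, with made-up example numbers:

```python
# Sanity check for "capacity of chillers vs. heat output of equipment".
# Standard conversions: 1 W ~= 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.
WATTS_TO_BTU_HR = 3.412
BTU_HR_PER_TON = 12_000

def heat_load_btu_hr(total_equipment_watts: float) -> float:
    """Heat rejected by the equipment, in BTU/hr."""
    return total_equipment_watts * WATTS_TO_BTU_HR

def cooling_margin(chiller_tons: float, equipment_watts: float) -> float:
    """Spare cooling capacity in BTU/hr; negative means undersized."""
    return chiller_tons * BTU_HR_PER_TON - heat_load_btu_hr(equipment_watts)

# e.g. four racks drawing 5 kW each against a 10-ton chiller (example numbers):
print(cooling_margin(chiller_tons=10, equipment_watts=20_000))
```

A negative margin here means the room can’t shed the heat the new gear will add — something far cheaper to discover in the survey than after the racks are populated.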
Fire Suppression
- Type and location of suppressant
- Maintenance checklists
- Potential fire hazards
Physical Security
- Types of access controls on entrances to the data center (cipher, key, biometric, etc.)
- Presence and capability of cameras (thermal, IR, pan, tilt, zoom)
- Is the building continuously occupied? Do people work in the data center?
- Distance of the building from hazards (near an airport, near water but at a lower level, prominent location, near a high-traffic area)
- Who has physical access? Verify accountability and access controls
- Emergency lighting during a power outage
Racks
- Used/free RU space
- Power/signal grounding of racks
- Quality of existing wiring for power, networking, and KVM
- Are racks lockable?
- Heat load/power draw for currently installed equipment
- Raised floor and its load capacity
- Rack stability and anchoring
And I can’t help but include some of the storage, networking, and server notes, as well.
Storage
- Makes/models in use, drive configuration, and capacity
- IP info, volume names, hostnames, domains
- DR sites
- IOps, quotas
- Interface names, 3rd-party OS and apps
- MAC/IP iSCSI addresses, protocols
Networking
- Makes/models in use, used capacity, 10 Gbps capability
- IOS versions
- Management capability (WhatsUp Gold, SolarWinds, CNA)
- WAN connectivity between sites, topology and protocols, speed/latency
- Availability of fibre channel/fabric switching
Servers
- Makes/models in use, hardware specs, 10 Gbps capability
- System performance baseline (CPU/memory/IO)
- OS versions, currently installed stand-alone ESX/ESXi, P2V viability
- For virtual servers: ESX/ESXi version, location of .vmdk files, resource allocations/limits/reservations, ESX/ESXi performance baseline (CPU/memory/IO)
- Attached storage details (LUNs, capacity, RAID levels, port assignments/WWNs), protocols supported, V2V viability
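One way to keep findings like these comparable across machines is to record each surveyed server as a structured record instead of free-form notes. A minimal sketch — the field names below are my own shorthand for a few of the server checklist items, not a standard schema:

```python
# Illustrative only: capture a subset of the server survey checklist as a
# structured record, so results can be collected, diffed, and exported.
from dataclasses import dataclass, field, asdict

@dataclass
class ServerSurvey:
    hostname: str
    make_model: str
    os_version: str
    ten_gbps_capable: bool
    cpu_baseline_pct: float = 0.0    # system performance baseline (CPU)
    memory_baseline_pct: float = 0.0 # system performance baseline (memory)
    p2v_viable: bool = False
    attached_luns: list = field(default_factory=list)

record = ServerSurvey(hostname="esx01", make_model="(vendor/model here)",
                      os_version="ESXi", ten_gbps_capable=True,
                      p2v_viable=True, attached_luns=["LUN0", "LUN1"])
print(asdict(record)["hostname"])
```

A list of such records is trivial to dump to CSV or JSON for the final survey report, which beats reconciling several people’s handwritten walkthrough notes.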