Binding iSCSI Port Names to VMware Software iSCSI Initiator – ESXi 4.1
Posted: October 3, 2011 Filed under: Networking, Storage, VMware | Tags: 4.1, bind, esxcli, iscsi, iscsi initiator, port groups, software iscsi initiator, vmware, vsphere

For my notes, I'm sharing what I've found searching the 'net on binding VMkernel NICs to VMware's built-in software iSCSI initiator in ESXi 4.1. I know ESXi 5.0 has changed this process to a nice GUI, but we're stuck with the CLI in 4.1.
If you’re configuring jumbo frames as I’ve shown in a previous post, bind the VMkernel NICs after configuring jumbo frames.
Assuming you have two uplinks for iSCSI traffic, on the vSwitch of your iSCSI port group, temporarily set one uplink to unused. You'll also want to note the vmhba# of the software iSCSI adapter; you can find it under the Configuration tab > Storage Adapters by selecting the iSCSI Software Adapter. Likewise, note the VMkernel NIC name of each iSCSI port group under the Configuration tab > Networking. The view there shows the iSCSI port group name, the vmk#, the IP address, and the VLAN if you have one configured. Then, from a CLI session via the console or SSH, execute the following command for each VMkernel NIC:
Example: esxcli swiscsi nic add -n vmk# -d vmhba#
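Putting it together, a minimal session might look like the sketch below. The names are hypothetical (vmk1/vmk2 for the two iSCSI VMkernel NICs, vmhba33 for the software iSCSI adapter); substitute the values you noted from the vSphere Client.

```
# Bind each iSCSI VMkernel NIC to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify both NICs now show as bound to the adapter
esxcli swiscsi nic list -d vmhba33
```

Once both NICs are bound, remember to set the uplink you marked as unused back to active, then rescan the adapter.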
How to Configure Jumbo Frames for an iSCSI Port Group – ESXi 4.1
Posted: September 29, 2011 Filed under: Networking, VMware | Tags: 4.1, esxcfg, esxcfg-vmknic, esxcfg-vswitch, esxi, frames, iscsi, jumbo, jumbo frames, portgroup, vmkernel, vmknic

If you want to enable jumbo frames on an iSCSI port group in ESXi 4.1, you'll need to make configuration changes at both the vSwitch and VMkernel NIC level. Through trial and error, I found that I had to create the iSCSI port group from the command line rather than just enabling jumbo frames on an existing port group. I had originally created an iSCSI port group via the vSphere Client, but enabling jumbo frames on it from the CLI didn't work; I had to delete the port group first, then recreate it from the CLI. Note that these commands are case sensitive, including the names of vSwitches, port groups, and VMkernel NICs. These commands were run over an SSH session directly to an ESXi 4.1 host.
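As a sketch of the sequence, using hypothetical names (vSwitch1 for the iSCSI vSwitch, iSCSI1 for the port group, 10.0.0.1/255.255.255.0 for the VMkernel NIC address), the ESXi 4.1 commands look roughly like this:

```
# Set MTU 9000 on the vSwitch carrying iSCSI traffic
esxcfg-vswitch -m 9000 vSwitch1

# Create the iSCSI port group on that vSwitch
esxcfg-vswitch -A iSCSI1 vSwitch1

# Create the VMkernel NIC on the port group with MTU 9000
esxcfg-vmknic -a -i 10.0.0.1 -n 255.255.255.0 -m 9000 iSCSI1

# Verify: the MTU column should read 9000
esxcfg-vmknic -l
```

If a VMkernel NIC already exists on the port group, delete it first with `esxcfg-vmknic -d iSCSI1`, since in my experience the MTU can't simply be changed in place in 4.1.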
Configuring Cisco Nexus 5020 and 2224 Fabric Extenders for Virtual Port Channeling (vPC)
Posted: September 26, 2011 Filed under: Cisco Nexus, Networking | Tags: 2000 series, 2224, 5000 series, 5020, Cisco, Nexus, port channeling, virtual, virtual port channeling, vPC

So it's been a long time since I've posted. We've finally finished our data center site surveys and we're very close to starting the implementation phase. In preparation, we've begun testing configurations, playing with possibilities, and generally seeing what the given hardware can do. We don't know exactly what the architecture design team will give us to implement, but our pre-work will let us get a feel for the kinds of things we'll be doing. For instance, we know we'll be using virtual port channels and fabric extenders, so we've configured these on several of our Nexus switches. We'll probably blow away the configs when we officially start anyway, but again, this gives us a chance to get our hands on the equipment and practice some of the same configs we'll be using later.
The Design
Our test design will have two 5020s, each with 40 10GbE SFP+ ports plus a 6-port 10GbE SFP+ daughter card, and two 2224 fabric extenders (FEXs) with 24 1GbE copper ports. It will not be a cross-connected FEX design, because we will use servers with dual-homed 10GbE converged network adapters (CNAs), which we plan on cross connecting. Cisco does not support a design where both the FEXs and the servers are cross connected. Our test design looks like the diagram below. Note that the diagram shows the server connected to each FEX via copper ports; we'll actually be connecting each server via CNA twinax cables to the 10Gb ports on the 5020s.
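For reference, the core of a vPC setup on each 5020 follows the same pattern. This is only a minimal sketch with assumed values (vPC domain 10, mgmt0 keepalive addresses, port channel 1 as the peer link); interface numbers and addressing will differ per environment.

```
! Enable the vPC feature and define the domain (same domain ID on both peers)
feature vpc
vpc domain 10
  ! Keepalive runs out-of-band between the peers, here over mgmt0
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

! The peer link carries vPC control and data traffic between the 5020s
interface port-channel 1
  switchport mode trunk
  vpc peer-link

interface Ethernet1/1-2
  channel-group 1 mode active
```

With the domain and peer link up on both switches, `show vpc` should report the peer status as "peer adjacency formed ok" before any member port channels are added.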
How to Configure LUN Masking with Openfiler 2.99 and ESXi 4.1
Posted: September 11, 2011 Filed under: Storage, VMware | Tags: esxi, lun masking, openfiler, vmware

Note: If you'd like to see screenshots for this article, check out this other post.
I've been building a test environment to play with vSphere 4.1 before we begin our implementation. In order to experiment with the enterprise features of vSphere, I needed shared storage between my ESXi hosts. As always, I turned to Openfiler. Now, I've deployed Openfiler before, but it was just one ESXi host and a single LUN. It was easy, and there were plenty of good walkthroughs on setting it up that way. But using the Google-izer, I couldn't find a single page that explained how to configure Openfiler for shared storage between multiple hosts. When I finally got it working, I felt accomplished and decided to document the process for future reference. Maybe someone out there will find it useful, too.
VMware OS Compatibility – Upgrading Windows
Posted: July 4, 2011 Filed under: Tid-bits | Tags: Compatibility, vmware, Windows, Workstation

It's no fun doing research all day on your only day off, so I took a minute to read Mike D's VMware blog over at http://www.mikedipetrillo.com/ where I came across some interesting and fun stuff. The "stuff" is actually just an embedded YouTube video, but one that should send you back in time for a few minutes to reminisce about simpler days. The video shows the upgrade of every major version of Microsoft Windows since — get this — Windows 1.0! He actually started with MS-DOS 5.0 because the earliest Windows versions required it. One interesting tid-bit is that the launch of Windows 1.0 predates the VGA standard and its numerous analog extensions we've all come to know and love. Instead, it uses EGA.
File System Alignment in Virtual Environments
Posted: June 24, 2011 Filed under: NetApp, Storage, VMware, Windows | Tags: alignment, file system, file system alignment in virtual environments, storage, vmware

In speaking to my fellow Implementation Engineers and team leads, I've come to learn that file system misalignment is a known issue in virtual environments and can cause performance problems for virtual machines. A little research has provided an overview of the storage layers in a virtualized environment, details on the proper alignment of guest file systems, and a description of the performance impact misalignment can have on the virtual infrastructure. NetApp has produced a white paper that speaks to file system alignment in virtual environments: TR-3747, which I've reproduced below.
In any server virtualization environment using shared storage, several layers of storage sit between a VM and its data. Shared storage can be presented to the hypervisor in different ways, and each method involves a different set of those layers.
VMware vSphere 4 has four ways of using shared storage for deploying virtual machines: VMFS datastores, NFS datastores, raw device mappings (RDMs) in virtual compatibility mode, and RDMs in physical compatibility mode.
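Whichever presentation method is used, alignment itself comes down to simple arithmetic: a guest partition is aligned when its starting offset is an even multiple of the storage block size. A quick sketch (the 4 KB block size is an assumption; the 63-sector start is the classic offender from older Windows partitioning):

```python
def is_aligned(partition_offset_bytes: int, block_size_bytes: int = 4096) -> bool:
    """A partition is aligned when its start falls on a storage-block boundary."""
    return partition_offset_bytes % block_size_bytes == 0

# Classic misaligned layout: Windows 2003-era partitions start at sector 63
# (63 * 512 = 32,256 bytes), so guest I/Os can straddle two storage blocks.
print(is_aligned(63 * 512))     # misaligned

# Starting the partition at sector 128 (a 64 KB offset) lands on a boundary.
print(is_aligned(128 * 512))    # aligned
```

That straddling is the performance cost: a single guest read or write that crosses a block boundary forces the array to touch two blocks instead of one.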
Preparing for a data center site survey
Posted: June 19, 2011 Filed under: Site Surveys

Before we ever start installing hardware or configuring software, we're going to be conducting site surveys. Although preliminary surveys have already been done, we'll be going a bit more in-depth to identify any remaining requirements for enterprise virtualization. Initially, my thoughts were centered on the obvious things, like existing servers, storage, and networking assets. But a CISSP on our team created a document that goes much deeper and would catch more deficiencies that could impact a successful data center deployment. I'll highlight the likely less-thought-about issues that could hamper us if not properly accounted for:
Power
Distribution points, availability (how much downtime per day/week/month), how often main-line power is interrupted, are there brown-outs or line-spikes? Load factors for UPS units and current load (can the current UPS handle the hardware?), rack-mounted or facility UPS, UPS run-time under load, UPS battery life-cycle and maintenance, back-up generators, rack power distribution unit types and available receptacles with at least two per rack, each connected to different UPSs/distribution points/circuits
HVAC and Facility Air Handling
Capacity of chillers vs. heat output of equipment, is HVAC on UPS? Are there portable chillers in use? Rack flow capacity, rack row layout to include hot/cold rows, humidity
Fire Suppression
Type and location of suppressant, maintenance checklists, potential fire hazards
Physical Security
Types of access controls on entrances to data center (cipher, key, biometric, etc.), presence of cameras and their capabilities (thermal, IR, pan, tilt, zoom), is the building continuously occupied? Do people work in the data center? Distance of building from hazards (near airport, near water but at lower elevation, prominent location, near high-traffic area), who has physical access? Verify accountability and access controls, emergency lighting during power outage
Rack Configuration
Used/free RU space, power/signal grounding of racks, quality of existing wiring for power, networking, KVM, are racks lockable? Heat load/power draw for currently installed equipment, raised floor and capacity of floor, rack stability and anchoring
And I can’t help but include some of the storage, networking, and server notes, as well.
Storage
Makes/models in use, drive configuration and capacity, IP info, volume names, hostnames, domains, DR sites, IOps, quotas, interface names, 3rd party OS and apps, MAC/IP iSCSI addresses, protocols
Networking
Makes/models in use, used capacity, 10Gbps capable, IOS versions, management capability (WhatsUp Gold, SolarWinds, CNA), WAN connectivity between sites, topology and protocols, speed/latency, availability of fibre channel/fabric switching
Servers
Makes/models in use, hardware specs, 10Gbps capable, system performance baseline (CPU/memory/IO), OS versions, currently installed stand-alone ESX/ESXi, P2V viability, for virtual servers: ESX/ESXi version, location of .vmdk files, resource allocation/limits/reservations, ESX/ESXi performance baseline (CPU/memory/IO), attached storage details (LUNs, capacity, RAID levels, port assignments/WWNs), protocols supported, V2V viability
Introducing Mike Brown…well, virtually.
Posted: June 19, 2011 Filed under: Introduction

Welcome to VirtuallyMikeBrown. This is my place on the interwebs to track my experiences as a new Virtualization Implementation Engineer. I've started a new job that is a huge step up for me from my everyday Windows systems administration. I love my chosen career in IT and I'm always looking for the next challenge. I'm beginning my fourth year in IT and I've come a long way — discovering the fun in technology, meeting some great folks along the way, and getting paid to do a job I love. It's hard to ask for more. This site will document my on-the-job experiences, thought processes, and learning as I implement, as a member of a team, enterprise virtualization for a very large customer in one of the world's most austere environments. I hope you'll join me as I document, learn, and have fun.
All the best,
Mike




