I ran across an issue today that my various sources of troubleshooting (ok, Google) couldn’t help solve – at least not directly. I configured SnapMirror between two disparate systems for a data migration. 16 of the 17 volumes initialized just fine, but I was getting an error on the one volume that had a LUN inside. It was a SnapDrive for Windows LUN, so I knew that just prior to the final cutover I’d have to take a Snapshot via SnapDrive, but I should be able to start the baseline transfer via the standard CLI. Here’s what I was seeing:
ControllerA> snapmirror initialize -S ControllerZ-vif01:vol_server2008 ControllerA:vol_server2008
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
Mon May 13 14:21:26 CDT [ControllerA:replication.dst.err:error]: SnapMirror: destination transfer from ControllerZ-vif01:vol_server2008 to vol_server2008 : process was aborted.
Here are the relevant excerpts from my config files – in short, everything was configured correctly, but the initialization wouldn’t start.
The source controller:
ControllerZ> rdfile /etc/snapmirror.allow
10.1.1.8
ControllerA-VIF01-6
ControllerZ> rdfile /etc/hosts
#---used for SnapMirror data migration---#
10.1.1.8 ControllerA-VIF01-6
And the destination controller:
ControllerA> rdfile /etc/snapmirror.conf
ControllerZ-vif01:vol_server2008 ControllerA:vol_server2008 - - - - -
ControllerA> rdfile /etc/hosts
#---used for SnapMirror data migration---#
10.4.1.3 ControllerZ-vif01
The controllers could also ping each other. I ran a traceroute from the destination to the source, forcing the use of the specific replication links, like this:
ControllerA> traceroute -s ControllerA-VIF01-6 -v ControllerZ-vif01
traceroute to ControllerZ-vif01 (10.4.0.113) from ControllerA-VIF01-6, 30 hops max, 40 byte packets
 1 10.1.1.2 (10.1.1.2) 36 bytes to 10.1.1.8 0.000 ms 1.000 ms 0.000 ms
 2 ControllerZ-vif01 (10.4.1.3) 36 bytes to 10.1.1.8 0.000 ms 0.000 ms 0.000 ms
I tried to initialize the baseline transfer like so but received an immediate error.
ControllerA> snapmirror initialize -S ControllerZ-vif01:vol_server2008 ControllerA:vol_server2008
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
Mon May 13 14:21:26 CDT [ControllerA:replication.dst.err:error]: SnapMirror: destination transfer from ControllerZ-vif01:vol_server2008 to vol_server2008 : process was aborted.
Everything seemed right to me and the googalizer was coming up empty so I was at a bit of a loss. Finally, Matt Oswalt’s article, while not addressing the error specifically, did mention looking at the source controller’s CLI for errors written to the console. That was the key. Here’s what I found:
ControllerZ> Mon May 13 14:27:25 CDT [ControllerZ: replication.src.err:error]: SnapMirror: source transfer from vol_server2008 to ControllerA:vol_server2008 : cannot create incremental snapshot: No space left on device.
Of course: SnapMirror takes a snapshot before each transfer, but apparently there wasn't enough space in the volume for that snapshot. Looking at the vol options for the offending volume on the source, I saw that fractional reserve was set to 100%. The volume is 30GB, the LUN is 15GB, and there's no snapshot reserve allocated, so the problem is clear. With fractional reserve at 100% and those volume and LUN sizes, there is no space in the volume for anything but the original LUN and its fractional reserve space, hence the “no space left on device” error.
ControllerZ> vol options vol_server2008
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on, convert_ucode=on,
maxdirsize=20971, schedsnapname=ordinal, fs_size_fixed=off, compression=off,
guarantee=none, svo_enable=off, svo_checksum=off, svo_allow_rman=off,
svo_reject_errors=off, no_i2p=on, fractional_reserve=100, extent=off,
try_first=volume_grow, read_realloc=off, snapshot_clone_dependency=off,
nbu_archival_snap=off
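The arithmetic behind the failure can be sketched in a few lines of shell. The numbers come straight from the volume above; the variable names are just for illustration:

```shell
# With fractional_reserve=100, ONTAP sets aside overwrite reserve equal to
# 100% of the LUN size once a snapshot exists in the volume.
vol_gb=30       # volume size
lun_gb=15       # LUN size
frac_pct=100    # fractional_reserve from 'vol options'

reserve_gb=$(( lun_gb * frac_pct / 100 ))    # overwrite reserve: 15GB
free_gb=$(( vol_gb - lun_gb - reserve_gb ))  # left over for snapshot data: 0GB
echo "space left for snapshots: ${free_gb}GB"
```

With zero gigabytes left over, the first snapshot SnapMirror tries to create has nowhere to go.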
So I resized the volume to 50GB, gave it 20% snapshot reserve and looked at the results.
The resize is a bit more than what was actually needed, but the idea was to make the volume big enough to avoid the same error. No snapshots had been taken, so I had no historical reference for sizing the snapshot reserve. You can see that the data space used in the volume is about 30GB, which includes the 15GB LUN and the 100% fractional reserve.
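For reference, the resize amounts to something like the following 7-Mode commands. I'm not reproducing my exact session here, so treat this as a sketch:

```shell
ControllerZ> vol size vol_server2008 50g
ControllerZ> snap reserve vol_server2008 20
ControllerZ> df -g vol_server2008    # verify data and snapshot space afterward
```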
Restarting the SnapMirror initialization did not produce an error and using snapmirror status showed bits being transferred. Huzzah!
Full disclosure: Packt Publishing gave me a free copy of the book in order to review it.
So before receiving this book, I hadn’t taken the time to get cozy with vCloud Director. It was on my list of things to do, and quite honestly, I knew I would be left with Google to find my way with vCD. Fortunately, Packt offered up this gem just in time. This was the first of Packt’s “Instant Starter” books I’d read. I didn’t know exactly what to expect, but I ended up pleasantly surprised. The book reads a lot like installation notes, like those one would create at work, only better. There are good screenshots throughout, as well as explanations of each component. It’s as if the author walks you up to a summit, points out interesting objects on the horizon, then encourages you to explore them on your own. It gets you up and running, leaves many features untouched, but gives you explicit exercises to perform afterward. So it’s almost like a teaser: it gives you a taste of vCloud, a slice, but leaves the rest of the pie for you to finish later. I used it to get my vCloud environment running in Workstation in no time. I’ll admit, though, that I was left wanting more. I had to keep reminding myself of the author’s intention: it wasn’t to walk through every installation step, and certainly not every configuration piece, but to bring the reader to a certain point and let them discover the rest on their own. In that light, this book meets its goal. I’m impressed with it and am grateful to Packt for letting me review it. Check the book out here: Instant VMware vCloud Starter
If the normal Right-click > Format > Fill… doesn’t work to add transparency to a Visio 2010 object, try setting the same transparency percentage for both Fill and Line. In my quick testing, the standard method didn’t work with some objects. I’d been looking for this functionality for a while and finally found a reference to the workaround here.
I’ve recently needed to configure SPAN a couple times in the lab at work to troubleshoot some issues – or at least to see what I could see. It wasn’t exactly glamorous work, but somebody had to do it. Now, I had to look it up the first time because it had probably been a good year since I’d done it. The document I used is here. Well, the second time I needed to configure SPAN was shortly after the first. I was annoyed that I had to look at the same document and skip over all the paragraphs to get to the commands, then sort out the FC ports and other commands I didn’t need. So for my benefit, and perhaps yours, here’s my short and sweet version of how to configure SPAN on a Nexus 5k.
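A minimal Ethernet-only session looks something like the following. The session and interface numbers are placeholders, so adjust them for your environment:

```shell
switch(config)# monitor session 1
switch(config-monitor)# source interface ethernet 1/10 both
switch(config-monitor)# destination interface ethernet 1/20
switch(config-monitor)# exit
switch(config)# interface ethernet 1/20
switch(config-if)# switchport monitor
switch(config-if)# exit
switch(config)# monitor session 1
switch(config-monitor)# no shut
switch(config-monitor)# end
switch# show monitor session 1
```

Note that on the Nexus 5k the destination port needs switchport monitor, and the session is created shut by default, so don’t forget the no shut.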
I’d like to take this opportunity to share a message from ONS 2013 as its conference nears.
Software Defined Networking (SDN) is the buzzword on the mind of every player in the networking and telecom ecosystem; it promises to revolutionize networking as we know it and will affect service provider, cloud, and enterprise networks.
Open Networking Summit (ONS) 2013 is the premier conference for SDN and OpenFlow and has established itself as the leading event to ‘plug in’ to SDN.
ONS brings together the entire SDN ecosystem, including thought leaders, business leaders, luminaries, creators, researchers, innovators, and engineers, offering the highest-caliber presentations, tutorials, exhibitions, and latest research so the SDN community can interact and share ideas.
After my recent DFW VMUG presentation where I spoke on the topic, a friend emailed me and asked what I thought about OTV.
“You mentioned that you were against OTV. Curious on your take on this, as we are using it across two datacenters using N7K, UCS, NetApp and VMware.”
I’d like to share my response to him here.
Please don’t get me wrong. If one is forced to implement a Layer 2 Data Center Interconnect (DCI), OTV is probably the best solution. Sometimes, L2 connectivity between data centers is a functional requirement – perhaps even a constraint. In these cases, one should look at the benefits and risks of implementing an L2 DCI and then make an informed decision on whether they should continue with such a deployment. Should they choose to deploy OTV, someone needs to accept the risks associated with OTV in its current implementation.
The DFW VMUG has opened registration for its upcoming local meeting.
Sign up here: http://www.vmug.com/e/in/eid=801&source=5
I’d like to thank our meeting sponsors, Nutanix and Zerto, for helping keep the VMUG alive and kicking.
Yours truly will be giving a short presentation at 12:15 about why I worked *not* to have OTV implemented when the bank I worked for stood up its first DR site. I’ll also speak about VXLAN and why it’s not a L2 Data Center Interconnect. I’m sure you won’t want to miss that…
View the complete agenda for the most up-to-date information. We’ll also hold a vBeers following the meeting, so come and say hi.