Hyper-V 2012 R2

If the release date of Windows 8.1 is any indication, Server 2012 R2 is nearing RTM and I’m super excited, despite the fact that there are only one or two features that I’d likely be using.

Most of my interest comes from the Hyper-V improvements, especially online VHDX expansion. I’ve been slowly converting my VHDs to VHDX during maintenance windows, and I’m sure glad I’ve been putting in the time. Being able to expand the size of my VM disks without downtime is a huge benefit.
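For reference, this is roughly what the conversion and resize look like with the Hyper-V PowerShell module that ships with Server 2012 / 2012 R2 (the paths and sizes below are placeholders, not my actual environment):

    # Convert an existing VHD to VHDX during a maintenance window
    # (the VM must be off, since Convert-VHD copies the data to a new file).
    Convert-VHD -Path 'D:\VMs\FileServer\Data.vhd' -DestinationPath 'D:\VMs\FileServer\Data.vhdx'

    # On 2012 R2, a VHDX attached to a virtual SCSI controller can be
    # grown while the VM is running -- no downtime needed.
    Resize-VHD -Path 'D:\VMs\FileServer\Data.vhdx' -SizeBytes 500GB

    # Then extend the volume inside the guest to use the new space.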

Unfortunately, it looks like I’m going to have to rebuild my cluster again, since you can’t mix host OS versions within a cluster. That really sucks, but I take solace in the fact that I can do an in-place upgrade from Server 2012 rather than a complete bare-metal reinstall.


There are still lots of improvements to be made in my Hyper-V environment, starting with backup and disaster recovery. My backup plan from 2012 never really got off the ground due to a variety of issues, but it is going to be picked back up right away. My preliminary thought (before spending time researching) is to get a second SC847 disk chassis, set one up inside the LAN as a backup target using something like AppAssure, Veeam, Unitrends or Altaro, and then replicate that backup repository offsite to the second chassis over whatever link I have available. This way the primary backup runs over gigabit, and the offsite replication can take advantage of deduplication and other WAN-friendly technologies. Then I’ll add Hyper-V Replica to the mix for disaster recovery.
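For the Hyper-V Replica piece, the Server 2012 cmdlets make per-VM replication fairly simple. A minimal sketch, assuming Kerberos authentication and made-up server and VM names:

    # On the offsite replica server: allow inbound replication.
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation 'E:\ReplicaStorage'

    # On the primary host: enable replication for one VM and start
    # the initial copy over the wire.
    Enable-VMReplication -VMName 'FileServer01' `
        -ReplicaServerName 'DR-HOST01' -ReplicaServerPort 80 `
        -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName 'FileServer01'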

So far in my environment I haven’t had to scale up to a 3-node cluster, but I’m budgeting for it this coming fiscal year anyway, because it’s going to happen, and I’m excited for that too. It will give me more RAM headroom per host when doing server maintenance (with two nodes, each host must stay under 50% committed RAM so everything can fail over to one box; with three, roughly two-thirds), and offer better performance for some of our heavier VMs.

PowerEdge R410 – PERC S300 and Server 2012

I encountered an epic planning fail on my part this morning. During the annual Christmas Maintenance that I perform, I decided to upgrade my Hyper-V cluster to Server 2012, since I have the Datacenter licenses and want to use some of the features.

So I took my time and prepared the upgrade, documenting every step I would take so that things would go smoothly.

When I began the process, I booted into the Server 2012 media, loaded the PERC S300 drivers from Dell (the Server 2008 R2 drivers should work, right?), and got a blue screen.

I did some quick digging, and the S300 is not supported for Server 2012, and never will be.

I can’t believe that I didn’t think to check my RAID card for compatibility before embarking on this. At this point I don’t want to run my Hyper-V host without RAID (which is what I’d get by dropping the S300 back to a plain SATA controller), so I’ll ask my Dell rep about a PERC H200 or H700.


Why is it that with all the preparations made, there’s always something forgotten?


Using Quick Storage Migration for VHDs with DFSR data

As shown in my last post, I recently added some storage to our SAN, and will be moving existing VHD files from our Hyper-V cluster to this new storage.

The unique thing about this move is that these VHDs contain data that is being served with Microsoft DFS and replicated with DFSR. Hopefully word is spreading about the backup requirements for DFSR data stores, which are quite specific, especially when it comes to snapshots, because of the multi-master database that DFSR uses. With that in mind, I was a little concerned about the Quick Storage Migration (QSM), so I started digging.

I eventually came across this blog post that went into detail about how QSM works; it mentions that QSM takes a snapshot of the VM and creates differencing disks, and that eventually the snapshot is merged and the VM restarted from a saved state. At this point I was concerned about my DFS data, so I sent an email to the wonderful and always helpful AskDS blog seeking clarification.


Here’s the response from Ned Pyle:

My presumption is that this is safe because – from what I can glean – this feature never appears to roll back time to an earlier state as part of its differencing process.

He then went the extra mile and contacted internal Microsoft peers closely involved with the QSM feature, who responded:

That’s correct, we don’t revert the machine state to an earlier time. A differencing disk is created to keep track of the changes while the parent vhd is being copied. Once the VHD is copied, the differencing disk is merged into the parent.
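To put that mechanism in concrete terms, here’s the same parent/differencing pattern expressed with the standalone Hyper-V cmdlets from Server 2012. This is only an illustration of the pattern; QSM is a VMM feature and performs these steps internally (paths below are made up):

    # 1. New writes land in a differencing disk; the parent VHD goes idle.
    New-VHD -Path 'D:\VMs\Data_diff.vhd' -ParentPath 'D:\VMs\Data.vhd' -Differencing

    # 2. Copy the idle parent to the new storage (the long part).
    Copy-Item 'D:\VMs\Data.vhd' 'E:\NewStorage\Data.vhd'

    # 3. Copy the small diff disk, point it at the copied parent, and
    #    merge the accumulated changes into it. At no point is the disk
    #    rolled back to an earlier state.
    Copy-Item 'D:\VMs\Data_diff.vhd' 'E:\NewStorage\Data_diff.vhd'
    Set-VHD -Path 'E:\NewStorage\Data_diff.vhd' -ParentPath 'E:\NewStorage\Data.vhd'
    Merge-VHD -Path 'E:\NewStorage\Data_diff.vhd' -DestinationPath 'E:\NewStorage\Data.vhd'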


Based on that, I performed a QSM of my 1.6 TB VHD yesterday. It took 12.5 hours to complete (a sustained rate of roughly 36 MB/s), but in the end it was fully successful, with no negative repercussions.

Something interesting to note: I had to manually move a different 350 GB VHD file to my new storage first, rather than using QSM, because I was out of space on the original storage to create the AVHD differencing disk. I shut down the VM, transferred the VHD (which took about an hour), re-pathed it within the VM settings, and turned the VM back on.
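For what it’s worth, with the Server 2012 cmdlets the same manual move would look something like this (VM name and paths are placeholders; at the time I did it through Hyper-V Manager):

    # Shut the VM down cleanly before touching its disk.
    Stop-VM -Name 'FileServer01'

    # Copy the VHD to the new storage (about an hour for 350 GB here).
    Copy-Item 'C:\ClusterStorage\Volume1\Data.vhd' 'C:\ClusterStorage\Volume2\Data.vhd'

    # Re-path the existing virtual disk in the VM's settings, then power back on.
    Set-VMHardDiskDrive -VMName 'FileServer01' `
        -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
        -Path 'C:\ClusterStorage\Volume2\Data.vhd'
    Start-VM -Name 'FileServer01'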

Following this, I received DFSR error 2212: “The DFS Replication service has detected an unexpected shutdown on volume F:”. I’m not sure why this occurred, and since I only did the one transfer, I can’t rule out that it was related to some other operation or a bad shutdown.
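If you want to check whether a move tripped the same event on your own file server, the DFS Replication event log is easy to query; a quick sketch:

    # Look for recent 'unexpected shutdown' events (ID 2212) in the
    # DFS Replication log.
    Get-WinEvent -FilterHashtable @{ LogName = 'DFS Replication'; Id = 2212 } -MaxEvents 10 |
        Format-List TimeCreated, Message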


Processor & RAM upgrade on Dell R410

Have I said before that I love virtualization? Because I really, really do.

In my original Hyper-V implementation, I used two Dell R410s, each with 32 GB of RAM (4 sticks) and a single Xeon 5630 processor. It’s been a little while since then, with some additional VMs brought online for various services. My benchmarks showed it was time to upgrade the cluster, mostly for RAM failover headroom; I can’t let available RAM drop below 50% on either host, or all the VMs won’t be able to run on one host during a failover.

So I called up my Dell rep, ordered 2 x Xeon 5630 processors and 8 x 8 GB sticks of RAM, and installed them today.


The install went very smoothly, and because of the Hyper-V cluster and Live Migration, occurred in the middle of the day without downtime or interruption.

This is the process I used:

  • Manually drained the host of VMs using Live Migration (Windows Server 2012 will have this as a built-in feature, which is nice; see the PowerShell sketch after this list).
  • Performed Windows Updates and a BIOS update from Dell.
  • Restarted the server and entered the BIOS setup to confirm the new version applied successfully.
  • Turned off the server and slid it out from the rack (man, do I love the RapidRails).
  • Opened up the chassis and removed the shroud covering the processors and RAM.
  • Added 4 sticks of RAM.
  • Removed the CPU filler and inserted the new processor.
  • Attached the passive heatsink and screwed it into the mounts.
  • Turned the server back on.
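For the drain step, here’s roughly what it looks like from PowerShell. On Server 2008 R2 each clustered VM has to be live-migrated off individually, while Server 2012 adds a one-step drain (host names below are placeholders, not my actual environment):

    # Server 2008 R2: live-migrate every VM group off the host by hand.
    Import-Module FailoverClusters
    Get-ClusterGroup |
        Where-Object { $_.OwnerNode -eq 'HV-HOST1' -and $_.GroupType -eq 'VirtualMachine' } |
        Move-ClusterVirtualMachineRole -MigrationType Live -Node 'HV-HOST2'

    # Server 2012: the built-in drain does the same thing in one step...
    Suspend-ClusterNode -Name 'HV-HOST1' -Drain
    # ...and when maintenance is finished:
    Resume-ClusterNode -Name 'HV-HOST1' -Failback Immediate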

And that’s it! Just like that, I’ve doubled the capacity of my infrastructure, and it took under an hour.


Future Technology can’t come soon enough

As I’ve been reviewing my Hyper-V and storage infrastructure at work, I can’t help but wish that future technology was available right now.

Primarily, I wish I had production-ready releases of:

  • Windows Server 8 (for Hyper-V 3.0)
  • 802.11ac devices

Within my environment, running on our Dell MD3220i SAN, I’ve got a file server VM with multiple VHDs attached. Most of these VHDs correspond to a DFS folder target that is being replicated using DFSR. The problem is that the folder target storing the majority of our files is projected to grow beyond 2 TB before the end of 2012, and roughly 2 TB is the ceiling of the VHD format.

My short-term plan is to use a passthrough disk pointing at a new LUN on an MD1220 attached to our SAN, but I’d really like to keep it contained within a VHDX file from Windows Server 8, which raises the virtual disk limit to 64 TB.
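For the passthrough route, the physical disk gets offlined on the host and handed to the VM whole. A sketch using the Hyper-V module that eventually shipped with Server 2012 (disk number, controller slot and VM name are made up; on 2008 R2 today this is done through Hyper-V Manager and Disk Management):

    # The physical disk must be offline on the host before a VM can own it.
    Set-Disk -Number 4 -IsOffline $true

    # Attach the raw LUN to the file server VM as a pass-through disk.
    Add-VMHardDiskDrive -VMName 'FileServer01' `
        -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 2 `
        -DiskNumber 4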

The reason for that is BACKUPS. The plan I’m thinking about right now is to use Hyper-V Replica to an offsite server for disaster recovery, with file-level backups of the contents of our VMs to an offsite NAS using some method of dedupe and compression. Again, waiting on Windows Server 8 for that.

The issue with this backup plan is connectivity to our offsite location, a leased building across the parking lot from our head office.

Right now it’s connected by an 802.11a link at a nominal 54 Mbps. That’s really too slow to run backups across: even at the full theoretical rate, our 500 GB of Exchange data alone would take over 20 hours to copy, and real-world 802.11a throughput is closer to half that.

I’d really love to set up an 802.11ac link that provides up to gigabit throughput. While I’ve read that manufacturers such as Broadcom are working on these chips, they’re not commercially available yet.


The implementation of just these two things would make me much happier with my infrastructure and give me better scalability for the future.