Hyper-V 2012 migration to R2

A co-worker and I just completed an upgrade of our 2-node Server 2012 Hyper-V cluster to a 3-node Server 2012 R2 cluster, and it went very smoothly.

I’ve been looking forward to some of the improvements in Hyper-V 2012 R2, in addition to a 3rd node, which is going to be the basis for our Citrix XenApp implementation (with an NVIDIA GRID K1 GPU).

I’ve posted before about my Hyper-V implementation, which used iSCSI as the protocol but with direct connections rather than going through switching, since I only had 2 hosts.

For this most recent upgrade I needed to add a 3rd host, which meant a real iSCSI SAN. Here’s the network design I moved forward with:

Server 2012 R2 Network Design


This time I actually checked compatibility of my hardware before proceeding, and found no issues to be concerned about.

The process for the upgrade is described below, including the extra steps required when 1) renaming hosts in use with the MD3220i, and 2) converting to an iSCSI SAN instead of direct connect:

Before maintenance window

  • Install redundant switches in the rack (I used PowerConnect 5548’s)
  • Live Migrate VMs from Server1 to Server2
  • Remove Server1 from Cluster membership (Evict Node)
  • Wipe and reinstall Windows Server 2012 R2 on Server1
  • Configure Server1 with new iSCSI configuration as documented
  • Re-cable iSCSI NIC ports to redundant switches
  • Create new Failover Cluster on Server1
  • From Server1 run “Copy Cluster Roles” wizard (previously known as “Cluster Migration Wizard”)
    • This will copy VM configuration, CSV info and cluster networks to the new cluster
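Most of the pre-window steps can also be scripted. As a rough sketch (the host, cluster, and IP names here are placeholders for your own environment; the Copy Cluster Roles wizard itself is GUI-driven in Failover Cluster Manager, with no single cmdlet equivalent):

```powershell
# On the old 2012 cluster: drain roles off the node, then evict it
# ("Server1" and the cluster details are placeholders)
Suspend-ClusterNode -Name "Server1" -Drain    # live-migrates roles away
Remove-ClusterNode -Name "Server1"            # evict from the old cluster

# After wiping and reinstalling Server 2012 R2 on Server1,
# build the new single-node cluster to run Copy Cluster Roles against
New-Cluster -Name "HVCluster2" -Node "Server1" -StaticAddress "10.0.0.50"
```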

Within maintenance window

  • When ready to cut over:
    • Power down the VMs on Server2
    • Take the CSVs on the original cluster offline
    • Power down Server2
  • Remap the host mappings for each server in Modular Disk Storage Manager (MDSM) to the “unused iSCSI initiator” after renaming the hosts; otherwise you won’t find any available iSCSI disks
  • Reconfigure iSCSI port IP addresses for MD3220i controllers
  • Add host to MDSM (for new 3rd node)
  • Configure iSCSI Storage on Server1 (followed this helpful guide)
  • On Server1, bring the CSVs online
  • Start VMs on Server1, ensure they’re online and working properly
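The iSCSI connection and CSV/VM steps on Server1 can be sketched in PowerShell like this; the portal address and CSV resource name are placeholders for your MD3220i setup:

```powershell
# Register the MD3220i iSCSI portal and connect with MPIO
# (portal address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress "192.168.130.101"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true

# Bring each CSV online on the new cluster (repeat per disk)
Start-ClusterResource -Name "Cluster Disk 1"

# Start the VMs and check they come up
Get-VM | Where-Object State -eq "Off" | Start-VM
```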


At this point I had a fully functioning, single-node cluster on Server 2012 R2. With the right planning you can do this with 5-15 minutes of downtime for your VMs.

Next I added the second node:

  • Evict Server2 from the old cluster, effectively killing it
  • Wipe and reinstall Windows Server 2012 R2 on Server2
  • Configure Server2 with the new iSCSI configuration as documented
  • Re-cable iSCSI NIC ports to the redundant switches
  • Join Server2 to the cluster membership
  • Re-allocate VMs to Server2 to share the load
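Joining the rebuilt node and rebalancing the VMs can likewise be done from PowerShell; the cluster, node, and VM names below are placeholders:

```powershell
# Join the rebuilt node to the new 2012 R2 cluster
Add-ClusterNode -Name "Server2" -Cluster "HVCluster2"

# Live-migrate some VMs over to share the load (VM name is a placeholder)
Move-ClusterVirtualMachineRole -Name "FileServerVM" -Node "Server2" -MigrationType Live
```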

I still had to reset the preferred node and failover options on each VM.
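Resetting those per-VM settings can be scripted as well; the group and node names here are placeholders:

```powershell
# Set the preferred owners and failback behaviour for a clustered VM
Set-ClusterOwnerNode -Group "FileServerVM" -Owners "Server2","Server1"
(Get-ClusterGroup "FileServerVM").AutoFailbackType = 1   # 1 = allow failback
```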

Adding the 3rd node followed the exact same process. The Cluster Validation Wizard gave a few errors about the processors not being the exact same model, however I had no concerns there, as the new node simply has a newer-generation Intel Xeon.


The remaining task is to upgrade the Integration Services on each of my VMs, which requires a reboot, so I’m holding off for now.
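A quick way to see which VMs still need that Integration Services upgrade, run against a 2012 R2 host:

```powershell
# List each VM with its current Integration Services version
Get-VM | Select-Object Name, IntegrationServicesVersion, State
```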

Hyper-V 2012 R2

If the release date of Windows 8.1 is any indication, Server 2012 R2 is nearing RTM and I’m super excited, despite the fact that there are only one or two features that I’d likely be using.

Most of my interest comes from the Hyper-V improvements, especially online VHDX expansion. I’ve been slowly converting my VHDs to VHDX during maintenance windows, and I’m sure glad I’ve been spending the time. Being able to expand the size of my VM disks without downtime is a huge benefit.
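For reference, the conversion plus a later online grow look roughly like this (the paths and size are placeholders; Convert-VHD needs the VM shut down, while Resize-VHD on a SCSI-attached VHDX can run live on 2012 R2):

```powershell
# Offline step: convert the legacy VHD to VHDX (VM must be powered off)
Convert-VHD -Path "D:\VMs\app01.vhd" -DestinationPath "D:\VMs\app01.vhdx"

# Later, online: grow the VHDX without downtime (2012 R2, SCSI controller)
Resize-VHD -Path "D:\VMs\app01.vhdx" -SizeBytes 200GB
```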

Unfortunately it looks like I’m going to have to rebuild my cluster again, since you can’t have dissimilar host OS versions within a cluster. That really sucks, but I take solace in the fact that I can do an in-place upgrade from Server 2012 rather than a complete bare-metal reinstall.


There are still lots of improvements to be made in my Hyper-V environment, starting with backup and disaster recovery. My backup plan from 2012 never really got off the ground due to a variety of issues, but that is going to be picked back up right away. My preliminary thought (before spending time researching) is to get a second SC847 disk chassis, and set up one inside the LAN for backup using something like AppAssure, Veeam, Unitrends or Altaro. Then I’d replicate that backup repository offsite to the second disk chassis over whatever link I have available. This way my primary backup is done over gigabit, and the replication can take advantage of deduplication and other replication technologies. Then I’ll add Hyper-V Replica to the mix for disaster recovery plans.

So far in my environment I haven’t had to scale up to a 3-node cluster, but I’m budgeting for it this coming fiscal year anyway because it’s going to happen, and I’m excited for that too. It will give me more RAM headroom per host when doing server maintenance, and offer performance improvements for some of our heavier VMs.

DFSR Crash and replacement search

A few weeks ago my DFSR database crashed, and crashed hard. I won’t go into the details of my troubleshooting steps, mostly because I didn’t take good notes while it was ongoing and I was very sleep deprived. Suffice to say I spent many hours trying to resolve it, and wasn’t successful.

It was a Tuesday night that I disabled the DFS folder targets on my branch office servers, forcing all of my remote users to access our namespace over the VPN. I was hesitant to do it since our fastest link is 5 Mbps but it was the only way to ensure data integrity. Following that we needed to manually sync the data from our spoke servers to the hub since there had been 3 days of non-replication.

While that was going on, I began looking for a solution to our problem: we need staff in our branch offices working on the same files as the head office. These files are used across a variety of applications including AutoCAD and ArcGIS, so our users expect fast access to data that can be quite large.

This is something that is difficult to find information on; not many people are talking about how they handle branch office file collaboration especially in a larger company.

In my case I tested PeerSync for a few days to see if it could replace DFSR, however there were a few problems we encountered in our environment which made it unsuitable. In the end I re-implemented DFSR across our 2.5TB of data, and just waited for initial replication. This took another week.
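While waiting out initial replication, dfsrdiag is handy for watching the backlog drain; the replication group, folder, and server names here are placeholders:

```powershell
# Show the replication backlog from the hub to one spoke server
dfsrdiag backlog /rgname:CompanyData /rfname:Projects /smem:HubServer /rmem:BranchServer
```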

Since then DFSR has been running smoothly, however I’m still looking for a replacement that will be scalable for my company as we grow in offices and data size.


Right now, I’m considering two options:

  • Remote Desktop / VDI
  • WAN Acceleration

Both would require a fairly substantial initial capital investment, but with the growth my company has seen, one of them is inevitable.

It has been a long time since my last post because I’ve worked so much overtime lately on this issue, in the midst of other projects, and I just haven’t had the mental capacity to sit down and write.

In the next week or two I’ll be noting my thoughts on the two options above.

Set default value for lookup parameter in Lightswitch

I’ve begun developing an IT Inventory project in Visual Studio LightSwitch, both for actual use and to learn the product so that I can use it for a new line-of-business application at my company.
While working through this, I came across the need to set a default value for a parameter, and I struggled for a long time to find the answer.


Luckily there are great resources like Stack Overflow, which got me an answer in less than 24 hours!

My full question (and the answer) can be found here: http://stackoverflow.com/questions/13710308/set-default-value-for-parameter-in-lightswitch but below is a little background on what I’m trying to do.


There are multiple “Companies” to which our assets can be assigned, but 90% of the assets belong to CompanyA. When opening my default screen that displays the list of inventory items, I want it to default to CompanyA, with an option to change it afterwards.

So I’ve created a Query on my data source, assigned a parameter, and then added the proper data objects within the screen to provide the filter. It was trying to set a default for that parameter which was causing me problems.


I’m actually quite impressed by LightSwitch; it has enabled me to build this inventory application in less than a week of effort. There have been some quirks to work around, and you’ll want a final decision on what type of data source you’re using (SQL vs RIA Service in my case) before proceeding with lots of work, since you can’t change a screen’s data source later.

I bring this up because I started using two SQL data sources which are separate databases but have a join. LightSwitch makes this a ‘virtual join’ and thus doesn’t allow you to make queries and filters on the second data source. I’ve since converted to a RIA Service and am now redeveloping the application.

Microsoft Virtual Academy (and Cloud Leader contest)

Earlier today I received an email from TechNet Flash with an interesting mention of what it called the “Cloud Leader Contest”.

The premise is to get the word out about Server 2012, System Center 2012, and the corresponding private cloud features of that software. The benefit is an early-bird prize of a laptop, tablet and phone (ending November 30th) and a grand prize of $5000.

If you’re reading this and it’s not November 30th yet, don’t register, because I want a better chance to win! Joking aside, the contest ends in July 2013, so you have lots of time to build up those entries.

Not my image, from Microsoft site
I wish I could look this good standing in my server room

After I registered and perused the entry methods, I discovered the second section where you can complete courses on the Microsoft Virtual Academy (MVA) for additional entries.

To be honest, I had never heard of MVA before, and at first glance I assumed it was very sales-oriented with lots of buzzwords. As I dug deeper into one of the courses, the PDF manual, and the whitepapers, I discovered a wealth of technical information on the new features of Server 2012.

I’m sure I could have found this information browsing through TechNet, but having it all accessible in one easy area, with progress tracking too, is very attractive.


After completing one course and a couple of the module exams, I was surprised to find my profile listed 209th in my country (Canada), despite having accumulated only 54 points. Perhaps I’m not the only one who hadn’t heard of MVA before.

I can definitely say that as I consider the new MCSA and MCSE certifications for Server 2012, the MVA will be a valuable starting point towards that goal.