Microsoft Virtual Academy (and Cloud Leader contest)

Earlier today I received an email from TechNet Flash with an interesting mention of what it called the “Cloud Leader Contest”.

The premise is to get the word out about Server 2012, System Center 2012, and the corresponding private cloud features of that software. The incentive is an early-bird prize of a laptop, tablet, and phone (ending November 30th) and a grand prize of $5,000.

If you’re reading this and it’s not November 30th yet, don’t register, because I want a better chance to win! Joking aside, the contest ends in July 2013, so you have lots of time to build up those entries.

[Image from the Microsoft site – I wish I could look this good standing in my server room]

After I registered and perused the entry methods, I discovered the second section where you can complete courses on the Microsoft Virtual Academy (MVA) for additional entries.

To be honest, I had never heard of MVA before, and at first glance I assumed it was very sales-oriented, with lots of buzzwords. As I dug deeper into one of the courses, along with its PDF manual and whitepapers, I discovered a wealth of technical information on the new features of Server 2012.

I’m sure I could have found this information by browsing through TechNet, but having it all accessible in one place, with progress tracking too, is very attractive.


After completing one course and a couple of the module exams, I was surprised to find my profile ranked 209th in my country (Canada), despite having accumulated only 54 points. Perhaps I’m not the only one who hasn’t heard of MVA before.

I can definitely say that as I consider the new MCSA and MCSE certifications for Server 2012, the MVA will be a valuable starting point toward that goal.


Server 2012 Storage Spaces and Hot Spares

I had previously blogged about my SC847 backup storage array, and how I’m contemplating using Windows Server 2012 Storage Spaces to manage the storage in a redundant way.

Yesterday I began setting that up, and it was very easy to configure. My only complaint with the process is that at the point where you’re actually making a choice (such as between Simple, Mirror, or Parity virtual disks), there isn’t much information on what the implications of that choice are.
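For reference, the same decisions can be scripted, where the parameter names at least hint at what each option means. Here’s a minimal sketch using the Server 2012 storage cmdlets (the pool, virtual disk, and physical disk names are just my examples):

# Pool every disk that is eligible for pooling
New-StoragePool -FriendlyName "BackupPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Mark one pooled disk as a hot spare before carving out the virtual disk
# (example disk name; check Get-PhysicalDisk output for yours)
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare

# -ResiliencySettingName is where the Simple/Mirror/Parity decision is made
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupVD" -ResiliencySettingName Parity -UseMaximumSize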

Since this storage was just being set up, I decided to familiarize myself with the failover functions of Storage Spaces.

Throughout this process I set up a storage space and virtual disk, and then removed a hard drive to see what would happen. What I observed was that both the storage space and virtual disk went into an alert state, and the physical disk list showed the removed disk as “Lost Communication”.
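The same state is visible from PowerShell, which is handy when the GUI is being vague; a quick check along these lines shows the health of each object:

Get-PhysicalDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus
Get-VirtualDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus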

I wiped away all of the configuration and recreated it, this time with a hot spare. When I performed the same test of pulling a hard drive, I expected the hot spare to immediately take over and rebuild parity on my virtual disk, but this didn’t happen.

Right-clicking the virtual disk and choosing “Repair” did force the virtual disk to utilize the hot spare, though.
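The equivalent from PowerShell appears to be Repair-VirtualDisk; assuming a virtual disk named “BackupVD” as in the sketch above, something like this kicks off the same rebuild:

# Force the virtual disk to regenerate onto the available hot spare
Repair-VirtualDisk -FriendlyName "BackupVD"

# The regeneration runs as a background storage job that can be monitored
Get-StorageJob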


While attempting to figure out the intended behavior, I came across this blog post by Baris Eris detailing the hot spare operation in depth. I won’t repeat everything here; instead I highly recommend you read what Baris has written as it is excellent.

One thing I will note is that I also had to use a PowerShell command to switch the re-connected disk back to a hot spare, but after doing that the red LED on my SC847 stayed lit until I power cycled the unit.
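For the record, that command is along these lines, with the friendly name being whatever Get-PhysicalDisk reports for the re-inserted drive:

# Return the re-connected disk to hot spare duty
Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare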

The end result for me is that hot spare behavior in Storage Spaces will work, as long as documentation is in place so that staff understand how it works and when manual intervention is necessary.


Great Big Hyper-V Survey – 2012

If you’re using Hyper-V in any capacity, and you haven’t already taken the “Great Big Hyper-V Survey 2012”, I highly recommend you do so. You can find detailed information about the survey here: http://www.hyper-v.nu/archives/hvredevoort/2012/10/the-great-big-hyper-v-survey-of-2012-has-launched/


This survey is put on by three MVPs (Hans Vredevoort, Aidan Finn, and Damian Flynn), and this is the second time they’ve run it. The goal is to gather feedback and hopefully help shape the next version of Hyper-V.


If you aren’t already following the blogs of Hans Vredevoort and Aidan Finn, you can find them below. They’re always full of great information on Hyper-V.

http://www.hyper-v.nu/

http://www.aidanfinn.com/


Intranet site not accessible externally by domain member

A strange issue popped up recently with one of my internal sites. To be honest, I’m not quite sure what changed, as this site hadn’t experienced the problem mentioned in the post title until just recently.

The problem is as follows:

  • A domain-joined computer is within the company LAN, and accesses intranet.company.com without issue.
  • A non-domain joined computer (such as my personal computer) is able to access intranet.company.com externally.
  • The domain-joined computer then travels outside the LAN and is unable to access intranet.company.com.


At first I thought this was a problem with my reverse proxy, but after extensive troubleshooting I ruled it out. Once I realized domain membership was a factor in connectivity, I knew the network firewall wasn’t the issue either. I suspected it had something to do with Internet Explorer’s categorization and rules around the Internet/Intranet/Trusted Sites zones.


Eventually I stumbled upon this Server Fault article, which led me to the solution. I needed to use the adsutil.vbs script to set the authentication providers on the affected directory to “NTLM” instead of the default “Negotiate,NTLM”. As that page mentions, I am using IE8 and IIS 6.


To use adsutil.vbs, I did the following:

Opened a command prompt and navigated to:

C:\Inetpub\AdminScripts

Then I opened IIS and took note of the site ID for the affected site:
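If you’d rather stay in the command prompt, adsutil.vbs can enumerate the sites as well; if I recall the switches correctly, the first command below lists the site paths under W3SVC, and the second confirms which site a given ID belongs to:

cscript adsutil.vbs ENUM /P W3SVC
cscript adsutil.vbs GET W3SVC/14548430/ServerComment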


Then I checked the current authentication value, with my affected site’s ID inserted into the command:

cscript adsutil.vbs GET W3SVC/14548430/Root/NTAuthenticationProviders

And after verifying it was the default, I changed it:

cscript adsutil.vbs SET W3SVC/14548430/Root/NTAuthenticationProviders "NTLM"

After this, my domain-joined computers were accessing it properly once again.

Fallibility – physically and at work

I haven’t made a post in a few weeks, which is unfortunate. However my excuse this time is a good one, as I broke my right clavicle playing Ultimate Frisbee. It was a failed dive into the end zone, and then a trip directly to the hospital for x-rays. Due to that, life at home and work (once I went back) has been busy to say the least.

Since I’ve been back, I’ve been busy setting up my new backup storage device, based on the SuperMicro SC847 chassis. My plan was to use 18 x 3.0 TB drives in a RAIDZ2 array provided by FreeNAS. This would then be presented to my Backup Exec 2012 server through iSCSI to be used as a deduplication storage device.

During my testing before the purchase, everything worked well and seemed like a great idea. Even after I received the parts and built the unit, my testing showed that this was a good solution.

However, the second goal for this unit was to place it across the parking lot in a secondary ‘off-site’ building, connected by two 802.11ac wireless devices. Once this move was completed, the wireless link turned out to be no faster than my existing 802.11a network over the same stretch, and then a drive in the FreeNAS array failed.

It’s no mystery that I’m not a Unix/FreeBSD guru, and ZFS is completely new to me. Because of that, I had difficulty troubleshooting and performing the proper steps to take the array out of its degraded state (there’s a bit more detail there; it wasn’t simply a failed drive, but it’s not entirely relevant to this post).

At this point I had to take a step back and admit that while FreeNAS and ZFS were a good idea in theory, with my department’s experience and standardization on Windows, it was a mistake to implement. There’s a large amount of risk in having a backup solution that isn’t easy to maintain and fix, especially when the person who set it up is out for a week (with a broken bone or something, one of those rare events that never happen, right?).

I’m not entirely sure what I’m going to replace FreeNAS with; perhaps I’ll utilize the SAS HBA capabilities to build the array with Windows on top, or perhaps use Windows Server 2012 storage pools/spaces.

What I’m actually getting at is that it’s OK to be wrong; it’s OK to make a mistake and need to re-design a solution. One person, or even a team of people, won’t design perfect infrastructure every time. The important part is to humbly admit it, and then improve. Make something better, and do a lessons-learned/post-mortem to ensure that the right questions are asked the next time a project comes up on the radar.