Azure Site Recovery and Backups

I’m working on a specific test case of Azure Site Recovery and came across an error, which identified a gap in my knowledge of ASR and Hyper-V Replica.

I have ASR configured to replicate a group of VMs at 5-minute intervals. My initial replication policy for this proof of concept was configured to hold recovery points for 2 hours, with app-consistent snapshots every hour.

In practice, the selectable recovery points I have seen are one every 5 minutes going back as far as the latest application-consistent recovery point, plus any earlier app-consistent recovery points within the configured retention window (2 hours):

Because the underlying mechanism is Hyper-V Replica, this corresponds to the options for Recovery Points visible in Hyper-V Manager:

Hyper-V performs the .HRL file replication to Azure every 5 minutes as configured, but it also uses the Hyper-V integration components to trigger an in-guest VSS snapshot for the application-aware recovery point at 1-hour intervals. This means the general RPO is up to 5 minutes, but the application-consistent RPO is up to 1 hour.
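
Because the engine underneath is Hyper-V Replica, you can sanity-check the replication lag on the host itself. Here is a minimal sketch using the Hyper-V PowerShell module, where the VM name is just an example:

# On the Hyper-V host: configured replication settings and current health/lag for one VM
Get-VMReplication -VMName "POC-VM01"
Measure-VMReplication -VMName "POC-VM01"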


In addition to replication, I am backing up a VM with Quest Rapid Recovery. The test was to ensure that the two protection methods (disaster recovery and backup) do not conflict with each other. Rapid Recovery runs an incremental snapshot every 20 minutes, and for about 40% of them the following VSS service errors are logged in the Application event log:

Volume Shadow Copy Service error: The I/O writes cannot be flushed during the shadow copy creation period

Volume Shadow Copy Service error: Unexpected error DeviceIoControl(\\?\Volume{a9dca4cb). hr = 0x80070016, The device does not recognize the command.
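
To gauge how often these errors line up with the backup schedule, a quick query of the Application log helps. This is just a sketch, assuming the events are logged by the 'VSS' provider (as the messages above suggest):

# List recent VSS errors (Level 2 = Error) from the Application log
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'VSS'; Level = 2 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message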


Quest has a KB article about this issue, which says to disable the Hyper-V integration component for backup in order to avoid a timing conflict when the host uses the Volume Shadow Copy requestor service. The problem is that disabling this prevents ASR from getting an application-aware snapshot of the virtual machine, and ASR will begin to throw warnings after a few missed intervals:

These problems make sense though: whenever Hyper-V's hourly attempt to take an application-aware snapshot using VSS coincides with a Rapid Recovery job, Rapid Recovery finds the writer in use and times out waiting for it. There isn't a way to configure at what point in the hour Hyper-V takes its snapshot, so I've begun tweaking my Rapid Recovery schedule to avoid rounded intervals like :00 or :10, using :03 or :23 instead, in an attempt to avoid the VSS timing conflict. So far this hasn't been as effective as I'd hoped.
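
When one of the Rapid Recovery snapshots does fail, dumping writer state on the guest right afterwards can confirm whether the Hyper-V requestor still has the writer tied up:

vssadmin list writers

Writers left in a failed or waiting state immediately after a failed job are a fairly strong hint that the two requestors are colliding.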

The other alternative is to disable application-aware snapshots if they're not needed. If the VM holds just flat files or an application that doesn't natively tie into VSS, the best you can expect is a crash-consistent snapshot anyway, so you should configure your ASR replication policy accordingly by setting that value to 0. In this manner you can still retain multiple hours of recovery points; they'll just ALL be crash-consistent.
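
For reference, a rough sketch of what that policy could look like in PowerShell is below. This assumes the classic AzureRM.SiteRecovery module and its New-AzureRmSiteRecoveryPolicy cmdlet; the parameter names are from memory and the storage account ID is a placeholder, so verify everything with Get-Help against your installed module before relying on it.

# Placeholder: resource ID of the target storage account in Azure
$storageAccountId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

$policyParams = @{
    Name                                          = "POC-ASR-Policy"
    ReplicationProvider                           = "HyperVReplicaAzure"
    ReplicationFrequencyInSeconds                 = 300   # replicate every 5 minutes
    RecoveryPoints                                = 2     # retain recovery points for 2 hours
    ApplicationConsistentSnapshotFrequencyInHours = 0     # 0 = crash-consistent only (1 = hourly app-consistent)
    RecoveryAzureStorageAccountId                 = $storageAccountId
}
New-AzureRmSiteRecoveryPolicy @policyParams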


SonicWall Preempt Secondary Gateway

This is something fairly simple and obvious, but I wanted to note it down anyway.

I wanted to use the SonicWall site-to-site VPN feature called “Preempt Secondary Gateway” found on the Advanced tab of VPN properties:

This is effectively VPN failback: without it, if your primary goes down and then returns to service, the VPN will have been established on the secondary gateway and won't automatically renegotiate back to the primary until the IKE lifetime expires. That can be a disadvantage in cases where the secondary gateway is a sub-par link or has metered bandwidth.

You will want to be careful with this setting, however: if your primary has returned to service but isn't stable, it could cause a renegotiation loop on your tunnel that would impact its availability.


I noticed on some VPNs this option was missing:


This is because a secondary gateway wasn't specified; as soon as you enter anything in that field (even 0.0.0.1), the option dynamically appears on the Advanced tab.

Azure Site Recovery setup errors

While setting up an Azure Site Recovery proof of concept, I encountered errors: first while associating the replication policy, and then while updating the authentication service.

The background is SCVMM connected to a Server 2012 R2 Hyper-V cluster, replicating to Azure. During the final steps of the "Prepare Infrastructure" phase, you need to associate a replication policy. This failed at the following step:

The text of the error was:

Error ID: 10003
Error Message: Protection couldn't be configured for cloud/site POC-ASR.
Provider error code: 31408

Provider error message:

	Failed to fetch the version of Microsoft Azure Recovery Services Agent installed on the Hyper-V host server . Error: An internal error has occurred trying to contact the  server: : .

WinRM: URL: [http://:5985], Verb: [INVOKE], Method: [GetStringValue], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/cimv2/StdRegProv]

Check that WS-Management service is installed and running on server .

Provider error possible causes:
	It is possible that Registry provider of WMI is corrupted.

Provider error recommended action:
	Build the repository using MOF compiler and retry the operation.

This occurred right before I was distracted by other items so I didn’t directly troubleshoot. When I came back to the Azure Portal (in a fresh session) I had a surprising new message greeting me at the Recovery Services Vault blade:

This was very odd, since I had just installed the latest version of the Site Recovery provider on my VMM host, as well as the MARS agent on my Hyper-V hosts. But when I clicked “Update Now” it listed my VMM host and displayed a new button to “Update Authentication Service”.

This almost immediately errored out:

Error ID: 635
Error Message: Updating authentication service information for server -  failed.
Provider error code: 31437

Provider error message:

	Failed to fetch the version of Microsoft Azure Site Recovery Agent installed on the Hyper-V host(s) '' as the host is not reachable.

Provider error possible causes:
	
      1. Windows Management Instrumentation service crashed.
      2. Windows Remote Management (WinRM) service is not running.
      3. Required services may not be running on the Hyper-V host(s)''.
  
Provider error recommended action:
	
      Ensure that
      1. A firewall is not blocking HTTPS/HTTPS traffic on the Hyper-V host.
      2. If the server is running windows Server 2008 R2, ensure that KB 982293 is installed on it. Refer to https://aka.ms/kblink982293 for more details.
      3. The Hyper-V Virtual Machine Management service is running.
      4. Ensure that the Windows Management Instrumentation service is running on the Hyper-V host(s).
      5. Ensure that the Windows Remote Management (WinRM) service is running on the Hyper-V host(s).
      6. Verify that CredSSP authentication is enabled on the service configuration of the Hyper-V host(s). To enable the CredSSP on the service configuration, run the following command on the Hyper-V host, from an elevated command line: winrm set winrm/config/service/auth @{CredSSP="true"}.
      7. The Provider version running on the server is up-to-date. Download and install the latest Microsoft Azure Site Recovery Provider.
      8. If the error persists, retry the operation and contact support.
    

I validated all the components in the list above, checked the referenced articles, and ensured WMF was updated to 5.1, all to no avail.
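
For reference, the kind of spot checks involved look roughly like this, run from the VMM server against each Hyper-V host (the host name is a placeholder):

# Confirm WinRM answers and the relevant services are running on the host
Test-WSMan -ComputerName HV-HOST01
Get-Service -ComputerName HV-HOST01 -Name WinRM, Winmgmt, vmms

# Confirm CredSSP is enabled for the WinRM service (run this one on the host itself, elevated)
winrm get winrm/config/service/auth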

I finally stumbled upon this post on the Microsoft forums, where a check was done against WMI for the "StdRegProv" object mentioned in the original replication policy error. It turns out this was my problem too! When I ran the WMI query, it returned the error 'Exception calling "GetStringValue": "Provider not found"' on 3 of my 4 Hyper-V hosts:

# 2147483650 = 0x80000002 = HKEY_LOCAL_MACHINE
$hklm = 2147483650
# Registry value that holds the installed Microsoft Azure Recovery Services (MARS) agent version
$key = "Software\Microsoft\Windows\CurrentVersion\Uninstall\Windows Azure Backup"
$value = "DisplayVersion"
# Query the WMI registry provider (StdRegProv), which the version check in the error above relies on
$wmi = Get-WmiObject -List "StdRegProv" -Namespace root\cimv2
($wmi.GetStringValue($hklm, $key, $value)).sValue
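
To check several hosts in one pass, the same query can be wrapped in Invoke-Command (the host names here are placeholders):

Invoke-Command -ComputerName HV-HOST01, HV-HOST02, HV-HOST03, HV-HOST04 -ScriptBlock {
    $wmi = Get-WmiObject -List "StdRegProv" -Namespace root\cimv2
    ($wmi.GetStringValue(2147483650, "Software\Microsoft\Windows\CurrentVersion\Uninstall\Windows Azure Backup", "DisplayVersion")).sValue
}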

I ran the mofcomp command below, and afterwards the last line of the previous query ($wmi.GetStringValue) returned a value instead of an error.

# Recompile regevent.mof, which re-registers the WMI registry provider (StdRegProv)
cd c:\windows\system32\wbem
mofcomp regevent.mof

Following this, the “Update Authentication Service” job completed successfully, and I was able to associate my replication policy without further problems.


Barracuda Tunnel to SonicWall going down

A few months ago I solved a problem with a site-to-site VPN tunnel between a SonicWall NSA appliance and an Azure VM running Barracuda NextGen Firewall.

This VPN tunnel would go down once every 2-3 weeks, and only be restored if I manually re-initiated it from the Barracuda side.

I had intended to save a draft of this post with details of the error I found in the Barracuda log, but apparently failed to do so.

So the short answer is: make sure that "Enable Keepalives" is turned on for the SonicWall side of the tunnel. This has kept the VPN stable long-term.

Get Inner Error from Azure RM command

Today I'm working on an ARM template to deploy some resources into an Azure subscription. After building my JSON files and prepping parameters, I used the "Test-AzureRmResourceGroupDeployment" cmdlet to validate my template.
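
For reference, the validation call looked roughly like this (the resource group and file names are placeholders):

Test-AzureRmResourceGroupDeployment -ResourceGroupName "POC-RG" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json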

This failed with the error:

Code    : InvalidTemplateDeployment
Message : The template deployment 'e76887a9' is not valid according to the validation
          procedure. The tracking id is '6df0fffb'. See inner errors for details. Please
          see https://aka.ms/arm-deploy for usage details.
Details : {Microsoft.Azure.Commands.ResourceManager.Cmdlets.SdkModels.PSResourceManagerError}

I found that decidedly unhelpful, but there is an effective way to determine the actual error message.

To retrieve the error details, use the following cmdlet, where the correlation ID equals the tracking ID mentioned in the error:

Get-AzureRmLog -CorrelationId 6df0fffb -DetailedOutput

This will produce output that you can investigate to determine where the error lies.
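
If you want just the failure detail rather than the full event dump, something like the sketch below can narrow it down. The Properties.Content path and the 'statusMessage' key are an assumption about the shape of the entries returned by the AzureRM module and may differ between versions:

# Assumption: each log entry exposes a Properties.Content dictionary containing 'statusMessage'
Get-AzureRmLog -CorrelationId 6df0fffb -DetailedOutput |
    ForEach-Object { $_.Properties.Content["statusMessage"] }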

In my case, I needed to open a core quota increase request with Azure support, as my subscription had reached its limit.