Add Maintenance Configuration to Azure Dedicated Host issue

When using Azure Dedicated Hosts, Microsoft offers a way to have more control over maintenance events that occur on the host. This is done through “Maintenance Control” configuration objects.

I’m attempting to test this, and have been receiving failures that I need to investigate.

To start, I created a host group and a host in the Azure Portal. Then I created a maintenance configuration.
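For reference, the equivalent resources can also be created with Az PowerShell. This is only a rough sketch, since I built mine in the Portal; the resource names, SKU, and schedule values below are illustrative rather than the exact ones I used:

# Illustrative values only - I created my actual resources in the Portal
$resourcegroupname = "eastus2-rg"
$location          = "EastUS2"

# Dedicated host group and a host within it
New-AzHostGroup -ResourceGroupName $resourcegroupname -Name "eastus2-hostgroup" -Location $location -PlatformFaultDomain 1
New-AzHost -ResourceGroupName $resourcegroupname -HostGroupName "eastus2-hostgroup" -Name "eastus2-host1" -Location $location -Sku "DSv3-Type1" -PlatformFaultDomain 0

# Maintenance configuration scoped to dedicated hosts, with a daily window
New-AzMaintenanceConfiguration -ResourceGroupName $resourcegroupname -Name "eastus2-maintconfig" -Location $location -MaintenanceScope "Host" -StartDateTime "2021-06-01 00:00" -TimeZone "Eastern Standard Time" -Duration "05:00" -RecurEvery "Day"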

Now when I go into the Maintenance Configuration and try to assign my Host, the activity fails:

 

After a few minutes, this shows up in the Hosts activity log:

 

With this detail:

 

I have two assumptions to test at this point:

  1. Maybe something is broken in the portal, because the error indicates it is trying to create an assignment with “dummyname” which doesn’t look right
  2. Perhaps a VM must be assigned to the Dedicated Host before a Maintenance Configuration can be assigned to it

I wanted to test #1 before #2 to reduce the number of variables, and I used Az PowerShell to do so.

First, I run Get-AzMaintenanceConfiguration to retrieve my configuration:
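In my case that looks something like this (the configuration name is a placeholder); the point is just to capture the object so its Name, Id, and Location can be reused below:

# Grab the maintenance configuration created earlier (the name here is a placeholder)
$config = Get-AzMaintenanceConfiguration -ResourceGroupName $resourcegroupname -Name "eastus2-maintconfig"
$config | Select-Object Name, Id, Location, MaintenanceScope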

Then I create an assignment, using PowerShell splatting:

$AssignmentParams = @{
    ResourceGroupName           = $resourcegroupname
    ResourceParentType          = "hostGroups"
    ResourceParentName          = "eastus2-hostgroup"
    ResourceType                = "hosts"
    ResourceName                = "eastus2-host1"
    ProviderName                = "Microsoft.Compute"
    ConfigurationAssignmentName = "$($config.name)-host1"
    MaintenanceConfigurationId  = $config.Id
    Location                    = $config.location
}
 
New-AzConfigurationAssignment @AssignmentParams

Surprisingly, this seemed to work, with no errors in the output. I say surprisingly because I’m sure I tested this with errors last week, before I decided to sit down and write a blog post about it!

Performing a Get-AzConfigurationAssignment with a smaller set of parameters in a hashtable returns the assignment that I expect to see:

$GetAssignmentParams = @{
    ResourceGroupName  = $resourcegroupname
    ResourceParentType = "hostGroups"
    ResourceParentName = "eastus2-hostgroup"
    ResourceType       = "hosts"
    ResourceName       = "eastus2-host1"
    ProviderName       = "Microsoft.Compute"
}
Get-AzConfigurationAssignment @GetAssignmentParams


When I look in the Portal, I can see the assignment on the Host:

 

It also appears when looking at the Maintenance Configuration:

 

Azure Availability Zones and latency testing

While working on a design for virtual machine placement in Azure, I got to wondering about the specifics of Availability Zones and the potential performance impact of not actually choosing one. My findings below are partly conjecture at this point, since I haven’t found direct confirmation from Microsoft on the topic.

Availability Zones are a method within Azure to provide resiliency for resources by using multiple datacenters within a region.

Resources within Azure can be one of three types in relation to these zones:

  • Zonal services – a resource is pinned to a specific zone (for example, virtual machines, managed disks, Standard IP addresses)
  • Zone-redundant services – the Azure platform replicates the resource automatically across zones (for example, zone-redundant storage, SQL Database)
  • None – not actually documented (yet?), but this is the type you get when you have a Zonal service and do not select a zone

The last item there is of particular interest – if you don’t select a zone for a Zonal service, where does it go? This issue from Microsoft Docs has a description of an “allocator” that works behind the scenes to make a decision on zone placement, but that decision is never surfaced to you; it isn’t even available in the Azure Resource Explorer.

For example, here’s a snippet of the metadata available for a VM with a specific Zone placement:

And here’s one without any at all:
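Screenshots aside, a quick way to check this property yourself is with Az PowerShell; the resource group and VM names below are placeholders:

# Placeholder names - returns the zone number(s) for a zonal VM, or nothing if no zone was selected
(Get-AzVM -ResourceGroupName "test-rg" -Name "testvm1").Zones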

This led to some questions for me:

  1. Am I losing performance (higher latency) by not setting my VMs in the same zone (if they happen to be placed in separate zones by the “allocator”)?
  2. Will I be charged for bandwidth between zones, when billing for it begins on July 1, 2021, if my VMs don’t have a zone selected but get placed in separate zones?

I’ve asked #2 in an Issue on the doc, and hopefully will receive an answer. I set out to test #1 within EastUS2.

Starting with Microsoft’s recommendation for latency testing on a virtual network, I downloaded the “latte.exe” tool and spun up some VMs.

The advantage of this tool, according to Microsoft, is:

latte.exe (for Windows) can isolate and measure network latency while excluding other types of latency, such as application latency.

Other common connectivity tools, such as Ping … employ the Internet Control Message Protocol (ICMP), which can be treated differently from application traffic and whose results might not apply to workloads that use TCP and UDP.
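For reference, the tests themselves were run roughly like this, following the pattern in that doc; the IP address, port, and iteration count are just example values:

# On the receiver VM (example IP, port, and iteration values)
latte -a 10.0.0.4:5005 -i 65100

# On the sender VM, pointing at the receiver
latte -c -a 10.0.0.4:5005 -i 65100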

The output of this tool looks like this, and it is the Latency value we’re after:

While running multiple tests on idle VMs, I found a discrepancy of ~20-30 us between tests, so take that into account when viewing the results below.

Here’s some of the results that I found:

Test: Result (us)
2 VMs, same availability zone, accelerated networking is false: 340
2 VMs, same availability zone, accelerated networking is true: 169
2 VMs, different availability zone, accelerated networking is false: 397
2 VMs, different availability zone, accelerated networking is true: 150
2 VMs, no availability zone selected, accelerated networking is false: 427
2 VMs, no availability zone selected, accelerated networking is true: 144
2 VMs, same availability zone, accelerated networking is true, proximity placement group aligned: 158

 

It doesn’t seem right, but the conclusion I draw from this is that latency between availability zones (at least in EastUS2) is functionally equivalent to latency within a zone, and even to latency within a proximity placement group, which is supposed to improve things even further.

I don’t have a good explanation for these results yet – perhaps my testing is flawed in some way, or perhaps this is specific to EastUS2 and the differences are more pronounced in other Regions where the datacenters are further apart, or where each zone consists of more datacenters.


VMM stuck logical network on NIC

I ran into an issue while setting up a new Hyper-V server in System Center VMM yesterday. I’m using a Switch Independent team on the server, and although I had configured it on the host first, I started down the path of setting up the networking with VMM components like uplink port profiles and logical switches. At one point I decided to revert to my original configuration, but I found that my new logical switch had a dependency on my host, despite my having removed all visible configuration.

The only odd thing I could see about this host was that the NIC used for the Hyper-V virtual switch was pinned to a logical network and greyed out; I couldn’t remove it:

This is a server in a cluster, and the second host didn’t exhibit the same problem. I decided to check off a logical network on a different NIC, and then hit the “View Script” button to see the PowerShell that VMM generated, to try and reverse engineer what was happening.

The PowerShell used the cmdlets “Get-SCVMHost”, “Get-SCVMHostNetworkAdapter”, and “Set-SCVMHostNetworkAdapter”. After following those through, I ended up with a command to remove the logical network from my NIC:

Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmhostnetworkadapter -RemoveLogicalNetwork $logicalNetwork

However, this produced an error:

Set-SCVMHostNetworkAdapter : The selected host adapter ‘Intel(R) Ethernet 10G 4P X520/I350 rNDC #2$$$Microsoft:{F17CF86F-A125-4EE7-9DB3-0777D9935BA4}’ has an uplink
port profile set configured with network sites, so logical networks, IP subnets, or VLANs cannot be directly modified on the host network adapter. (Error ID: 25234)

When I viewed the dependency on the Uplink Port Profile, it displayed my server name; but I couldn’t see this uplink port profile anywhere in the GUI for the server.

Reviewing the docs on the “Set-SCVMHostNetworkAdapter” cmdlet led me to another switch parameter: -RemoveUplinkPortProfileSet

Here’s the full PowerShell I used, which then allowed me to remove my logical switch and uplink port profile.

$vmHost = Get-SCVMHost -ComputerName "hostname"
# Find the NIC by ConnectionName, so I can use the real adapter name in the next command
#Get-SCVMHostNetworkAdapter -VMHost $vmHost | Select-Object Name, ConnectionName
$vmHostNetworkAdapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost -Name "Intel(R) Ethernet 10G 2P X520 Adapter"
$logicalNetwork = Get-SCLogicalNetwork -Name "logical network name"   # left over from the earlier -RemoveLogicalNetwork attempt; not needed for the next command
# Clear the stuck uplink port profile set from the adapter
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $vmHostNetworkAdapter -RemoveUplinkPortProfileSet

 

Migrate Azure Managed Disk between regions

This post is a reference for needing to move an Azure Managed Disk between regions. It is based on this Microsoft Docs article.

There are older posts out there that describe using Az PowerShell to export the disk to blob storage (as a VHD file) and then import that blob into a new Managed Disk.

This procedure skips those intermediate steps, and uses AzCopy to do it directly.

If there’s no active data changing on the disk, you could consider taking a snapshot and then producing a new Managed Disk from your snapshot to perform the steps below – this way you wouldn’t need to turn off the VM using the Disk. However, the situations where this might be viable are probably rare.
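If you did want to go the snapshot route, a minimal sketch would look something like the following (the disk, snapshot, and resource group names are placeholders); you would then run the copy steps below against the new disk instead of the one attached to the running VM:

# Placeholder names - snapshot the source disk, then create a new Managed Disk from the snapshot
$liveDisk       = Get-AzDisk -ResourceGroupName "test-centralus-rg" -DiskName "testweb1_c"
$snapshotConfig = New-AzSnapshotConfig -SourceUri $liveDisk.Id -Location $liveDisk.Location -CreateOption Copy
$snapshot       = New-AzSnapshot -ResourceGroupName "test-centralus-rg" -SnapshotName "testweb1_c-snap" -Snapshot $snapshotConfig

$diskConfig = New-AzDiskConfig -SourceResourceId $snapshot.Id -Location $liveDisk.Location -CreateOption Copy -SkuName 'Premium_LRS'
New-AzDisk -ResourceGroupName "test-centralus-rg" -DiskName "testweb1_c-copy" -Disk $diskConfig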

Here’s the overall process:

First, shut down and deallocate your VM.

Then open a PowerShell terminal and connect to your Azure subscription.
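For example (the subscription name is a placeholder):

# Sign in and select the subscription that holds the disks
Connect-AzAccount
Set-AzContext -Subscription "My Subscription"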

Then populate this script, and execute each command in sequence.

# Name of the Managed Disk you are starting with
$sourceDiskName = "testweb1_c"
# Name of the resource group the source disk resides in
$sourceRG = "test-centralus-rg"
# Name you want the destination disk to have
$targetDiskName = "testweb1_c"
# Name of the resource group to create the destination disk in
$targetRG = "test-eastus2-rg"
# Azure region the target disk will be in
$targetLocate = "EastUS2"
 
# Gather properties of the source disk
$sourceDisk = Get-AzDisk -ResourceGroupName $sourceRG -DiskName $sourceDiskName
 
# Create the target disk config, adding 512 bytes to the size (for the VHD footer) and using the 'Upload' create option
$targetDiskconfig = New-AzDiskConfig -SkuName 'Premium_LRS' -UploadSizeInBytes $($sourceDisk.DiskSizeBytes+512) -Location $targetLocate -CreateOption 'Upload'
 
# Create the target disk (empty)
$targetDisk = New-AzDisk -ResourceGroupName $targetRG -DiskName $targetDiskName -Disk $targetDiskconfig
 
# Get a SAS token for the source disk, so that AzCopy can read it
$sourceDiskSas = Grant-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName -DurationInSecond 86400 -Access 'Read'
 
# Get a SAS token for the target disk, so that AzCopy can write to it
$targetDiskSas = Grant-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName -DurationInSecond 86400 -Access 'Write'
 
# Begin the copy!
.\azcopy copy $sourceDiskSas.AccessSAS $targetDiskSas.AccessSAS --blob-type PageBlob
 
# Revoke the source SAS now that the copy is complete
Revoke-AzDiskAccess -ResourceGroupName $sourceRG -DiskName $sourceDiskName
 
# Revoke the target SAS so that the new disk can be attached to a VM
Revoke-AzDiskAccess -ResourceGroupName $targetRG -DiskName $targetDiskName

 

When you get to the AzCopy step, you should see results something like this:

In my experience, the transfer will go as fast as the rated speed of the slower of your managed disks – the screenshot above was from a Premium P15 disk (256 GB) rated at 125 MBps (or ~1 Gbps).

 

Azure WAF Policy and Application Gateway limitation

Today I encountered a concerning product limitation of the Azure Application Gateway and Web Application Firewall (WAF) Policies.

Some background first – when working with an Application Gateway v2 SKU, you can apply a WAF in two different ways:

  • As WAF configuration settings defined directly on the Application Gateway itself, or
  • As a separate WAF Policy object, created as its own resource and then associated with the Application Gateway.

Microsoft’s documentation appears to have been updated to show a preference for using the WAF Policy object, including a scripted method for converting to it: Migrate Web Application Firewall policies using Azure PowerShell

I moved to using a WAF Policy because I wanted to use a series of Terraform local variables to supply WAF rule configuration (exclusions) to both Application Gateway and Azure FrontDoor WAF Policies, without duplicating code.

But there is a severe limitation that you may not have noticed in the docs: once a WAF Policy has been associated with an Application Gateway, it cannot be disassociated from it; it can only be swapped for a different WAF Policy.

You might think, “that’s not a problem, I shouldn’t ever need to disassociate my application gateway”, but here’s where it gets wild.

Let’s say you decide you under-sized your Application Gateway and want to increase the maximum scale units, or set it to auto-scale. You go to the Configuration blade, modify the setting, and hit “Save”. You will see this error:

Uh-oh.

You can try to disassociate the policy in the Portal, but you’ll just see this:

What if we try to disassociate the Policy in PowerShell?
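Here’s roughly what that attempt looks like as an Az PowerShell sketch; the resource group and gateway names are placeholders, and the idea is simply to clear the FirewallPolicy reference and push the change:

# Placeholder names - fetch the gateway, clear the WAF Policy reference, and try to save
$appGw = Get-AzApplicationGateway -ResourceGroupName "test-rg" -Name "test-appgw"
$appGw.FirewallPolicy = $null
Set-AzApplicationGateway -ApplicationGateway $appGw

The result is the same complaint: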

"Firewall policy cannot be removed from Application Gateway, changing from one firewall policy to another is permitted."

This is the limitation – once you’ve applied a WAF Policy, the only way to make a configuration change against the Application Gateway is to destroy it and re-create it. This is absolutely crazy, and means I will not deploy another WAF Policy object until it is resolved. Generally speaking, one wouldn’t expect to be changing the AppGw configuration often, but being stuck like this is not a good place to be.

Others have talked about this, such as on ServerFault (with a suggestion to shut down the AppGw, which is not effective) and in this issue on azure-cli describing the limitation. To date I haven’t found any viable workaround.