Pass multi-value parameters to an Azure Runbook

I learned something recently about calling an Azure Automation Runbook through PowerShell, particularly around how to pass multi-value entries in a single parameter.

The original concept was a runbook that takes an Azure resource group, a network security group, and a DNS name as input parameters. It resolves the DNS name to an IP address, and then creates a rule inside the network security group for it.

Here’s a simplified example of what I started with:

param(
    [parameter(Mandatory=$false)]
    [string]$nsggroup = "*",
    [parameter(Mandatory=$false)]
    [string]$rulename = "source_dest_port",
    [parameter(Mandatory=$false)]
    [string]$rgname = "*-srv-rg",
    [parameter(Mandatory=$false)]
    [string]$endpointfqdn = "endpoint.vendor.com",
    [parameter(Mandatory=$false)]
    [int]$priority = 100 # referenced below but missing from the original snippet
)
# Resolve the endpoint FQDN to its IP address(es)
$dnstoip = [System.Net.Dns]::GetHostAddresses($endpointfqdn)
$subscriptions = Get-AzSubscription

$nsgs = Get-AzNetworkSecurityGroup -ResourceGroupName $rgname -Name $nsggroup
ForEach ($nsg in $nsgs) {
    $nsgname = $nsg.Name
    $nsg | Add-AzNetworkSecurityRuleConfig -Access Allow -DestinationAddressPrefix $dnstoip.IPAddressToString `
        -DestinationPortRange 443 -Direction Outbound -Name $rulename -Priority $priority `
        -SourceAddressPrefix * -SourcePortRange * -Protocol * | Set-AzNetworkSecurityGroup | Out-Null
}

The problem arose once I took this to a production environment: I needed more flexibility in where the rule got applied, without having to create many instances of the scheduled runbook, each linked to its own set of parameters.

The key was that I wanted to pass in a list of network security group names, and have the runbook do a ForEach on that list.

The primary change in parameters was to make $nsggroup an [array] type, like this:

[array]$nsggroup

Now, when calling the runbook manually in the Portal GUI, I would populate that parameter with this syntax:

['nsg1','nsg2']

This syntax and learning originally came from this doc link which states:

  • If a runbook parameter takes an [array] or [object] type, then these must be passed in JSON format in the start runbook dialog. For example:
  • A parameter of type [object] that expects a property bag can be passed in the UI with a JSON string formatted like this: {"StringParam":"Joe","IntParam":42,"BoolParam":true}
  • A parameter of type [array] can be input with a JSON string formatted like this: ["Joe",42,true]
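
On the runbook side, the matching change is to loop over that array when gathering NSGs. Here's a minimal sketch of the revised runbook, assuming the rest of the original logic stays the same:

param(
    [parameter(Mandatory=$false)]
    [array]$nsggroup = @("*")
    # ...remaining parameters unchanged from the original...
)
# Collect the NSGs for every name passed in the array
$nsgs = ForEach ($groupname in $nsggroup) {
    Get-AzNetworkSecurityGroup -ResourceGroupName $rgname -Name $groupname
}
ForEach ($nsg in $nsgs) {
    # ...same Add-AzNetworkSecurityRuleConfig logic as the original...
}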


In order to programmatically set up the runbook along with linked schedules, I put my input into an array containing PowerShell objects like this (note the “nsggroup” value on each object):

$inputrules = @(
    [pscustomobject]@{ name = "dnsresolve1"; rulename = "source_dest_port"; ports = @(443); endpointfqdn = "endpoint.vendor.com"; nsggroup = @('nsg1', 'nsg2', 'nsg5', 'nsg6') },
    [pscustomobject]@{ name = "dnsresolve2"; rulename = "source2_dest_port"; ports = @(443); endpointfqdn = "endpoint2.vendor.com"; nsggroup = @('nsg3', 'nsg4') }
)

In my script that does the programmatic setup, I use this array of objects to do something like this (vastly simplified):

foreach ($object in $inputrules) {
    Register-AzAutomationScheduledRunbook -Name $runbookname -ScheduleName "6Hours-$($object.name)" `
        -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName `
        -Parameters @{ "nsggroup" = $object.nsggroup; "rulename" = $object.rulename; "endpointfqdn" = $object.endpointfqdn; "ports" = $object.ports; "subscriptionid" = $clientsubscriptionid }
}
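
Each schedule referenced by -ScheduleName has to exist before it can be linked. In the same setup script I create those with something like this (a sketch; the start time here is a placeholder):

foreach ($object in $inputrules) {
    # Hypothetical 6-hour recurring schedule; pick a real start time for production
    New-AzAutomationSchedule -Name "6Hours-$($object.name)" `
        -AutomationAccountName $automationAccountName -ResourceGroupName $resourceGroupName `
        -StartTime (Get-Date).AddHours(1) -HourInterval 6
}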

You can see that by storing the list as an array on my custom object, I can pass it into the scheduled runbook as a simple parameter, and Azure Automation will take that and apply it successfully.


Terraform refactor to modules deletes resources

I’ve finally got a use case for Modules in Terraform, so I’m beginning to dip into testing with them. I’ve got a bunch of Azure resources (VNET, subnet, NSG, etc.) that already exist in my Terraform config, and I’m looking to essentially duplicate them for disaster recovery purposes.

In reality, I don’t want to duplicate the Terraform config, because if it ever changes or improves, it would not be efficient to track those changes in multiple spots. So instead I can move the resources that build my VNET into a module, add some variables, and then call the module twice from my main Terraform config – just need to pass in different values for the variables.

module "vnet" {
  source = "./network"
  env = "test1"
  location = "CanadaCentral"
}
 
module "dr-vnet" {
  source = "./network"
  env = "test2"
  location = "CanadaEast"
}
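
For reference, the ./network module itself just needs matching variable declarations for the values being passed in. A minimal sketch of ./network/variables.tf (the descriptions are my assumption of intent):

variable "env" {
  description = "Environment name used to differentiate the duplicated resources"
}

variable "location" {
  description = "Azure region to deploy the network resources into"
}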

While reading up on modules though, I noticed a note in the docs:

When refactoring an existing configuration to introduce modules, moving resource blocks between modules causes Terraform to see the new location as an entirely separate resource to the old. Always check the execution plan after performing such actions to ensure that no resources are surprisingly deleted.

Yikes! Even for a simple implementation of a VNET, deleting it means deleting a bunch of dependent resources like virtual machines.

However, there does appear to be a (very manual) way to work around this problem when refactoring into Modules: use the terraform state mv command to move the location of a resource in the state file.

This relies upon the syntax of referencing resources beginning with module."module name". For example, let's say I have this original resource:

resource "azurerm_resource_group" "test-rg" {
  name     = "test-rg"
  location = "CanadaCentral"
}

Assuming I call the module just like my example above, I would use the following terraform state mv command:

terraform state mv azurerm_resource_group.test-rg module.vnet.azurerm_resource_group.test-rg

Note the syntax of my destination, being module."module name".resourcetype."resource name"

The downside is that I would need to do this for each original resource that is moving into a Module – perhaps a small price to pay in order to have better managed code, and at least I don’t need to go and find the Azure resource ID like you do with the terraform import command.
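
Since each resource needs its own move, the repetition can be scripted by looping over terraform state list. A rough sketch in PowerShell, assuming everything still at the root of the state belongs in the module:

# Move every root-level resource in the state file into the "vnet" module
terraform state list |
    Where-Object { $_ -notmatch '^module\.' } |
    ForEach-Object { terraform state mv $_ "module.vnet.$_" }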

This resource group is actually a bad example, because terraform apply will fail when it tries to create two resource groups with the same name in the same subscription. This could be avoided by passing an alternate AzureRM Provider into your Module, using a different Azure subscription.
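
Passing an aliased provider into a module looks something like this (a sketch; the alias name and subscription ID are placeholders, and newer azurerm provider versions also require a features block):

provider "azurerm" {
  alias           = "dr"
  subscription_id = "00000000-0000-0000-0000-000000000000"
}

module "dr-vnet" {
  source   = "./network"
  env      = "test2"
  location = "CanadaEast"

  providers = {
    azurerm = azurerm.dr
  }
}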

 

SCVMM – Change VM NIC from Static to Dynamic IP

Just a quick post to document how to make a change with System Center Virtual Machine Manager.

I have a VM with a NIC attached to a VM Network that has a Static IP Pool. For a few not-so-great reasons, I need to remove it from that pool back to a Dynamic IP address.

In the GUI, this option is disabled.

Here’s how to accomplish it with PowerShell:

# Get the virtual machine into an object
$vm = Get-SCVirtualMachine -Name "VMNAME"

# Find the destination VM Network you want to change to (the one without the Static IP Pool)
$vmnetwork = Get-SCVMNetwork -Name "VMNETWORKNAME"

# Find the VM Subnet on the VM Network
$vmsubnet = Get-SCVMSubnet -VMNetwork $vmnetwork # Assumes a single subnet

# Pipe to find the NIC (assumes a single NIC) and set its address type to Dynamic
$vm | Get-SCVirtualNetworkAdapter | Set-SCVirtualNetworkAdapter -IPv4AddressType Dynamic -VMSubnet $vmsubnet
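
To confirm the change, you can read the adapter back, assuming the adapter object exposes the same IPv4AddressType property that the Set- cmdlet accepts:

# Should now report "Dynamic" as the IPv4 address type
$vm | Get-SCVirtualNetworkAdapter | Select-Object -Property Name, IPv4AddressType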

Use Azure Function to start VMs

A use-case came up recently to provide a little bit of self-service to some users with an occasionally used Virtual Machine in Azure. I have configured Auto-Shutdown on the VM, but want an ability for users to turn the VM on at their convenience.

Traditionally, this would be done by using Role-Based Access Control (RBAC) over the resource group or VM to allow users to enter portal.azure.com and start the VM with the GUI.

However, I wanted to streamline this process without having to manage individual permissions, given the low risk of the resource. To do so, I'm using an Azure Function (v2, PowerShell) to start all the VMs in a resource group.


First, create your function app (Microsoft Docs link) as a PowerShell app – this is still in preview on the Functions V2 stack, but it is effective.

The next thing I did was create a Managed Identity in my directory for this Function app. I wanted to ensure that the code the Function runs is able to communicate with the Azure Resource Manager, but did not want to create and manage a dedicated Service Principal.

Within the Function App Platform Features section, I created a Managed Identity for it to authenticate against my directory to access resources:

Go to “Identity”:

Switch “System Assigned” to ON and click Save:

With the Managed Identity now created, you can go to your Subscription or Resource Group, and add a Role assignment under “Access control (IAM)”.
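
The same role assignment can also be scripted. A sketch using hypothetical resource names, and assuming a built-in role like Virtual Machine Contributor is sufficient for start operations:

# Find the Function app's system-assigned identity, then grant it rights
# over the resource group containing the VMs (names are placeholders)
$functionApp = Get-AzWebApp -ResourceGroupName "function-rg" -Name "startvm-func"
New-AzRoleAssignment -ObjectId $functionApp.Identity.PrincipalId `
    -RoleDefinitionName "Virtual Machine Contributor" `
    -ResourceGroupName "source-rg"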

Lastly, I developed the following code to place into the function (github gist):

using namespace System.Net
 
# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)
 
# Interact with query parameters or the body of the request.
$rgname = $Request.Query.resourcegroup
if (-not $rgname) {
    $rgname = $Request.Body.resourcegroup
}
$action = $Request.Query.action
if (-not $action) {
    $action = $Request.Body.action
}
$subscriptionid = $Request.Query.subscriptionid
if (-not $subscriptionid) {
    $subscriptionid = $Request.Body.subscriptionid
}
$tenantid = $Request.Query.tenantid
if (-not $tenantid) {
    $tenantid = $Request.Body.tenantid
}
 
#Proceed if all request body parameters are found
if ($rgname -and $action -and $subscriptionid -and $tenantid) {
    $status = [HttpStatusCode]::OK
    Select-AzSubscription -SubscriptionId $subscriptionid -TenantId $tenantid
    if ($action -ceq "get"){
        $body = Get-AzVM -ResourceGroupName $rgname -status | select-object Name,PowerState
    }
    if ($action -ceq "start"){
        $body = Get-AzVM -ResourceGroupName $rgname | Start-AzVM
    }
}
else {
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass a name on the query string or in the request body."
}
 
# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = $status
    Body = $body
})

To provide secure access, I left my Function app with anonymous authentication but added a new Function Key, which I can use and control when calling this function. This is found under the “Manage” options of the function:

To test, I called the function externally like this, passing in my request body parameters and using the Function Key that was generated. You can grab the URL for your function right near the top “Save” button when you’re editing it in the Portal.

$Body = @"
{
    "resourcegroup": "source-rg",
    "action": "start",
    "subscriptionid": "SUBID",
    "tenantid": "TENANTID"
}
"@
$URI = "https://hostname.azurewebsites.net/api/startVMs?code=FUNCTIONKEY"
Invoke-RestMethod -Uri $URI -Method Post -Body $Body


If I run this with the “get” action, the Function will return the status of each VM in the resource group.

Azure Site Recovery learnings – Azure-To-Azure

A few things I’ve learned about Azure Site Recovery (ASR) recently, while doing an Azure-to-Azure DR design – some quite surprising:

  • Both your Recovery Services Vault AND its resource group must be located in a different region than the source
  • You need a cache storage account in the source region, however the staged incremental data will have negligible cost according to Microsoft
  • For source VMs utilizing Managed Disks, ASR will create destination Managed Disks and you will be charged for provisioned storage size, not consumed size. This differs from using unmanaged disks or for on-premises ASR, where consumed size is stored as page blobs in the destination storage account. I can’t find a Microsoft Doc link that specifically outlines this.
  • Egress bandwidth compression is estimated at about 60%, according to this blog post: Know exactly how much it will cost for enabling DR to your Azure VMs
  • VM Extensions are not replicated to a failover VM, and need to be manually installed: Doc Link
  • Secondary IP addresses are not replicated! These will need to be re-added through a Post-Failover task: Doc Link


Some things I still need to research and test are:

  • What happens if you perform a failover (or test failover) to a VM that is reporting into Log Analytics and Azure Automation (DSC, Update Management)? Will it seamlessly continue these operations, even when the VM extension no longer exists?
  • What happens to Azure Backup? Will the test failover VM using the Azure Backup Agent try to send backup data cross-region to the source Recovery Services Vault if the schedule time is triggered?