Azure Automation and DSC inside Pipeline

For a while I have been integrating Terraform deployment of Azure VMs with Azure Desired State Configuration running inside them (see my previous blog post).

Over time my method for deploying the Azure resources to support the DSC configurations has matured into a PowerShell script that checks for and creates prerequisites; however, there was still a bunch of manual effort involved, including the creation of the Automation Run As account.

This was one of the first things I started building in an Azure DevOps pipeline. Having now spent a bunch of time getting it working, I'm glad I did, and I learned a lot along the way.

Some assumptions are made here. You have:

  • an Azure DevOps organization to play around with
  • an Azure subscription
  • the capability/authorization to create new service principals in the Azure Active Directory associated with your subscription.

You can find the GitHub repository containing the code here, with the README file containing the information required to use the code and replicate it.

A few of the key considerations that I wanted to include (and where they are solved) were:

  • How do I automatically create a Run As account for Azure Automation, when it is so simple in the portal (1 click!)?
    • Check the New-RunAsAccount.ps1 file, which gets called by the Create-AutomationAccount.ps1 script in a pipeline
  • Azure Automation default modules are quite out of date and cause problems when using newer DSC resources and syntax, so they need to be updated.
  • I make use of Az PowerShell in runbooks, so I need to add those modules, but Az.Accounts is a pre-req for the others, so it must be handled differently
    • Create-AutomationAccount.ps1 has a section to do this for Az.Accounts, and then a separate function that is called to import any other modules from the gallery that are needed (defined in the parameters file)
  • I want to use DSC composites, but need a mechanism for automatically uploading that DSC module as an Automation Account module
    • A separate pipeline found in “ModuleDeploy-pipeline.yml” is used with tasks to achieve this
  • Don’t repeat parameters between scripts or files – one location where I define them, and re-use them
    • See “dsc_parameters.ps1”, which gets dot-sourced in the scripts that are called directly from pipeline tasks (a minimal sketch of this pattern follows below)
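
As a minimal illustration of that last point, the shared parameter file is just plain PowerShell variable assignments that each pipeline script dot-sources. This is only a sketch; the values are placeholders and the actual dsc_parameters.ps1 in the repository is the authoritative version.

# dsc_parameters.ps1 - one place to define values shared across the pipeline scripts (sketch only)
$resourceGroupName     = 'test-rg'
$azureLocation         = 'EastUS2'
$automationAccountName = 'test-auto'
$keyvaultName          = 'kv123'

Each script that is called directly from a pipeline task then starts with . "$PSScriptRoot\dsc_parameters.ps1" so those variables are available in its scope.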

Most important are the requirements to get started:

  • An Azure Subscription in which to deploy resources
  • An Azure KeyVault that will be used to generate certificates
  • An Azure Storage Account with a container, to store composite module zip
  • An Azure DevOps organization you can create pipelines in
  • An Azure Service Principal with the following RBAC and API permissions, so that it can itself create new service principals (a sketch of granting the subscription and Key Vault rights follows after this list):
    • must be “Application Administrator” on the Azure AD tenant
    • must be “Owner” on the subscription
    • must have appropriate rights to an access policy on the KeyVault to generate and retrieve Certificates
    • must be granted the necessary API permissions within the Azure Active Directory
  • An Azure DevOps Service Connection linked to the Service Principal above

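For the RBAC and Key Vault requirements above, a rough sketch of how they might be granted with Az PowerShell is below. The service principal display name and the variables are placeholders, and the Azure AD role and API permissions still need to be granted in Azure Active Directory itself; this is not the exact process from the repository, just an illustration.

# Sketch only: grant the pipeline service principal the subscription and Key Vault rights listed above.
# 'my-pipeline-sp', $subscriptionId and $keyvaultName are placeholders for your own values.
$spObjectId = (Get-AzADServicePrincipal -DisplayName 'my-pipeline-sp').Id

# "Owner" on the subscription
New-AzRoleAssignment -ObjectId $spObjectId -RoleDefinitionName 'Owner' `
    -Scope "/subscriptions/$subscriptionId"

# Key Vault access policy allowing the principal to generate and retrieve certificates
Set-AzKeyVaultAccessPolicy -VaultName $keyvaultName -ObjectId $spObjectId `
    -PermissionsToCertificates Get, List, Create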

The result is that we have two different pipelines, which can do the following (a PowerShell sketch of a few of these steps follows the list):

  • ModuleDeploy-pipeline.yml pipeline runs and:
    • takes module from repository and creates a zip file
    • uploads DSC composite module zip to blob storage
    • creates automation account if it doesn’t exist
    • imports DSC composite module to automation account from blob storage (with SAS)
  • azure-pipelines.yml pipeline runs and:
    • creates automation account if it doesn’t exist
    • imports/updates Az.Accounts module
    • imports/updates remaining modules identified in parameters
    • creates new automation runas account (and required service principal) if it doesn’t exist (generating an Azure KeyVault certificate to do so)
    • performs a ‘first-time’ run of the “Update-AutomationAzureModulesForAccount” runbook (because automation account is created with out-of-date default modules)
    • imports DSC configuration
    • compiles DSC configuration against configuration data

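To make a few of those steps concrete, here is a minimal PowerShell sketch of the kind of logic the pipeline scripts perform: create the Automation Account if it is missing, import Az.Accounts first (the gallery package URL format is an assumption), then import and compile the DSC configuration. The variable names follow the parameters file, and $configData is a placeholder for the configuration data object; the scripts in the repository are the authoritative versions.

# Sketch only - not the actual Create-AutomationAccount.ps1 from the repository
$account = Get-AzAutomationAccount -ResourceGroupName $resourceGroupName `
    -Name $automationAccountName -ErrorAction SilentlyContinue
if (-not $account) {
    $account = New-AzAutomationAccount -ResourceGroupName $resourceGroupName `
        -Name $automationAccountName -Location $azureLocation
}

# Az.Accounts goes in first, because the other Az modules depend on it
New-AzAutomationModule -ResourceGroupName $resourceGroupName `
    -AutomationAccountName $automationAccountName -Name 'Az.Accounts' `
    -ContentLinkUri 'https://www.powershellgallery.com/api/v2/package/Az.Accounts'

# Import the DSC configuration and compile it against configuration data
Import-AzAutomationDscConfiguration -ResourceGroupName $resourceGroupName `
    -AutomationAccountName $automationAccountName `
    -SourcePath ".\$dscConfigurationFile" -Published -Force
Start-AzAutomationDscCompilationJob -ResourceGroupName $resourceGroupName `
    -AutomationAccountName $automationAccountName `
    -ConfigurationName $dscConfigurationname -ConfigurationData $configData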

Terraform handling list of maps

Dropping a quick reference here on some specific use cases for Terraform syntax.

Scenario #1

I want to build an Azure Route Table. It is going to contain one or more routes, but those depend upon the implementation; one may have 1 or 2, another may have more, or even zero.

A normal route table in Terraform would look like this:

resource "azurerm_route_table" "test-routetable" {
  name                = "testroutes"
  location            = var.location
  resource_group_name = var.resourcegroupname
  disable_bgp_route_propagation = false

  route {
    name                    = "clientsubnet_to_nva"
    address_prefix          = var.networkipaddress["clientsubnet"] # This is a map variable
    next_hop_type           = "VirtualAppliance"
    next_hop_in_ip_address  = local.nva-ge3_ip # a local that populates the ip of my network virtual appliance
  }
}

The intention here is that anything matching a specific subnet gets routed through a network virtual appliance to do stuff with (scan, forward, etc).

With Terraform, you can put the routes inside the route table resource, or create them as independent resources and link them to the route table resource.

To enable the ‘variable’ nature of my routes, I created a new variable:

variable "clientnetworks" {
  type = list(map(string))
  default = []
}

You can see this is a list of maps containing string values. This lets me supply input that looks like this:

clientnetworks = [
  # Values must follow CIDR notation, so /32 or /27 or /24 or something
 {
   name = "Clientsubnet1" # Name will be the route name, no spaces
   value = "10.1.1.0/24"
 }
 ,
 {
   name = "Clientsubnet2"
   value = "10.1.2.0/24"
 }
]


Now I need to make my ‘route’ block dynamic within the route table. For this, I use a dynamic block with a for_each expression:

resource "azurerm_route_table" "test-routetable" {
  name                = "testroutes"
  location            = var.location
  resource_group_name = var.resourcegroupname
  disable_bgp_route_propagation = false

  # For each item in the list of this variable map, we create a route
  dynamic "route" {
    for_each = var.clientnetworks
    content {
      name                   = route.value["name"]
      address_prefix         = route.value["value"]
      next_hop_type          = "VirtualAppliance"
      next_hop_in_ip_address = local.nva-ge3_ip # a local that populates the ip of my network virtual appliance
    }
  }
}

This is saying, “for each item in the clientnetworks variable, create a route block within my route table resource, and set its contents based upon the values found within that element of the list.”

This achieves my goal, where I can populate the input variable with different contents for each implementation, and yet my resource declaration can stay consistent.

Scenario #2

I need to create network security group rules for the list of map values referenced in Scenario #1. This may be a single value, or multiple items, and I want them all contained within a single NSG rule.

I learned about Terraform Console today, which really helped in testing and understanding the correct syntax to use here.

If my input looks like this:

clientnetworks = [
  # Values must follow CIDR notation, so /32 or /27 or /24 or something
 {
   name = "Clientsubnet1" # Name will be the route name, no spaces
   value = "10.1.1.0/24"
 }
 ,
 {
   name = "Clientsubnet2"
   value = "10.1.2.0/24"
 }
]

Then what I want to achieve is a list containing each “value” from each map in my variable list. In effect: ["10.1.1.0/24", "10.1.2.0/24"].

This is done using a “splat” expression, as identified in the Terraform docs.
I can use the following syntax from my variable:

var.clientnetworks[*].value

Putting this into an NSG rule resource, remembering to use the plural “destination_address_prefixes” argument:

resource "azurerm_network_security_rule" "any_clientnetwork_any_mgmtnsg" {
  resource_group_name         = var.resourcegroupname
  name                        = "any_clientnetwork_any"
  priority                    = 1300
  direction                   = "Outbound"
  access                      = "Allow"
  protocol                    = "*"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "*"
  destination_address_prefixes  = var.clientnetworks[*].value
  network_security_group_name = azurerm_network_security_group.mgmt-nsg.name
  description                 = "This allows outbound to client networks"
}

Last Note

I have tested these scenarios when the input variable exists but is empty, and the results are not good. With the route, if it exists and then I empty the variable, Terraform won’t remove the route. But if it is already empty, then the dynamic block evaluates as empty and doesn’t create a route.

For the NSG, Terraform happily passes a validate and a plan, but when applying, Azure comes back with an error because it cannot create the resource when the destination prefix is empty.

I could create some conditional logic within that property line to check for when the variable is empty; however, I’m already using conditional logic from a different variable for the resource as a whole:

count = var.clientUsingVPN == true ? 1 : 0 # If client is using a VPN, we need this rule

I’m saying “if the client isn’t using a VPN, then don’t create this rule”, and at that point it doesn’t matter whether my variable that is a list of maps is empty or not.

Ingest JSON parameter file into PowerShell

I’m building an Azure DevOps pipeline, and wanted to separate out parameter input into its own file which may be re-used across multiple different task types. I investigated a method of declaring parameters in JSON, then ingesting them in a PowerShell script and using them to populate PowerShell variables.

For the time being, I’ve simplified my code to just use a direct PowerShell dot-sourced file, but want to keep this code snippet here for future reference.

JSON File

{
    "subscriptionid": "3d22393a",
    "resourceGroupName": "test-rg",
    "azureLocation": "EastUS2",
    "automationAccountName": "test-auto",
    "dscConfigurationname": "dsc_baseconfig",
    "dscConfigurationFile": "dsc_baseconfig.ps1",
    "keyvaultName": "kv123"
}


PowerShell

# Import parameters from common JSON for reuse
# This file cannot have comments in it!
param(
    [string]$paramFile = '.\dsc_parameters.json'
)
# For each object in the JSON, create a powershell variable
$params = Get-Content $paramFile | ConvertFrom-Json
$params.PSObject.Properties | ForEach-Object {
    New-Variable -Name $_.Name -Value $_.Value -Force
}

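As a hypothetical usage of the snippet above (assuming it is saved as Import-DscParameters.ps1, a name I’m making up for illustration), dot-sourcing it makes the JSON properties available as variables in the calling scope; running it normally would scope them to the script and they would be lost when it exits.

# Dot-source so the variables created by New-Variable land in the caller's scope
. .\Import-DscParameters.ps1 -paramFile .\dsc_parameters.json
Write-Output "Deploying $automationAccountName to $resourceGroupName in $azureLocation"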

Azure Function with Managed Identity and REST API

Did you know that you can get the status of an Azure Load Balancer health probe through the Azure REST API? It looks a little something like this:

$filter = "BackendIPAddress eq '*'"
$uri = "https://management.azure.com/subscriptions/$($subscriptionId)/resourceGroups/$($resourcegroupname)/providers/Microsoft.Network/loadBalancers/$($loadbalancername)/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=DipAvailability&`$filter=$($filter)"
$response = Invoke-RestMethod -Uri $uri -Method GET -Headers $RequestHeader

What we’re doing is using the metric named “DipAvailability”, which corresponds to the Portal display of “Health Probe Status”.

We then also apply a filter for “BackendIPAddress” so that the health count of each backend member is reported separately; 100 equals 100% successful probes over the time frame.

I’ve previously configured PRTG to call an Azure Function in order to get the health status of an Application Gateway, and I wanted to do the same thing here. However, this Function is a bit different, in that I’m not using native Az PowerShell module commands (because none that I could find exist for this purpose) but instead calling the API directly.

Typically, you would authenticate against the REST API with a service principal, and create a request through a series of token-generation and client-grant generation actions.

I didn’t want to do this; it is something I will come back to in order to fully understand the OAuth flow, but for now I wanted something simpler, like what I’ve used before with REST calls against Update Management.

The method I’ve used depends upon a PowerShell script called Get-AzCachedAccessToken that I’ve sourced from the TechNet Gallery.

In this case, I needed my Azure Function, using its Managed Identity, to call that PowerShell script, use the cached access token to build a Bearer token, and pass it to the REST call to authenticate. I could embed the contents of that script into my Function, but that is neither scalable nor clean code.

Instead, I can have the Azure Function load that script as a Module at startup, where it becomes available for all other Functions.

First, I take the Get-AzCachedAccessToken.ps1 file and simply rename it as a *.psm1 file. This will get added to a “Modules” folder within the function (see below). I also found that I needed to locate the profile.ps1 file for my Function App, and add the following:

foreach($file in Get-ChildItem -Path "$PSScriptRoot\Modules" -Filter *.psm1){
    Import-Module $file.fullname
}

You can add these files in a couple different ways. I have my Function developed within Visual Studio Code, and delivered through an Azure DevOps pipeline, which means the code sits in a repository where each function is a folder with a “run.ps1” and “function.json” file. Using the VS Code extension will build out the file framework for you.

Here is where I added the Module and modified the profile.ps1:

Then I use a pipeline task for “Deploy Azure Function App” to push my code within a release pipeline.

The alternative to this code-first approach is to modify the required files in the Azure Portal.

First you find your Function App, and then scroll down to “Advanced Tools”:

Click the “Go” link, and you’ll be taken to the Kudu Services page for your Function App.

Within the new window that opens, choose Debug Console -> CMD:

This will give you an in-browser display of the file system, where you can navigate to “site” -> “wwwroot”.

Here is where you want to create a new folder named Modules, and upload your *.psm1 file into it. You’ll also find profile.ps1 here, which can be edited in the browser as well.

Once those modifications are complete, give your Function app a restart.

Within my PowerShell Function, I can now use the cmdlet Get-AzCachedAccessToken because it has been loaded as a module. I then take its output and add it to a Bearer token string, which is passed as the request header to authenticate against the API!

    Select-AzSubscription -SubscriptionID $subscriptionid -TenantID $tenantid
    $BearerToken = ('Bearer {0}' -f (Get-AzCachedAccessToken))
    $RequestHeader = @{
        "Content-Type"  = "application/json";
        "Authorization" = "$BearerToken"
    }
    $filter = "BackendIPAddress eq '*'"
    # Call Azure Monitor REST API to get Metric value
    $uri = "https://management.azure.com/subscriptions/$($subscriptionId)/resourceGroups/$($resourcegroupname)/providers/Microsoft.Network/loadBalancers/$($loadbalancername)/providers/microsoft.insights/metrics?api-version=2018-01-01&metricnames=DipAvailability&`$filter=$($filter)"
    $response = Invoke-RestMethod -Uri $uri -Method GET -Headers $RequestHeader

For this specific function, what I’m doing with the results is taking the timeseries result that is passed from the metric and bringing it down to a 1 (if the value is 100) or a 0 (anything else), in order to re-use my PRTG custom lookups rather than building new ones:

# Create our Array List which we'll populate with custom ps object
    [System.Collections.ArrayList]$items = @()
    foreach ($value in $response.value.timeseries) {
        # Need to convert to PRTG expected values, so that we don't need additional lookup files
        # First make sure there is a value
        $convertedvalue = $null
        if ($value.data.average[-1]) {
            # If it is 100 then we're healthy
            if ($value.data.average[-1] -eq 100) {
                $convertedvalue = 1
            }
            else {
                $convertedvalue = 0
            }
        }
        else { # There was no average value found (host is probably off) so set to zero
            $convertedvalue = 0
        }
        [void]$items.Add([pscustomobject]@{ name = $value.metadatavalues.value; health = $convertedvalue })
    }
    # Expect output of $items to look like this:
    #name      health
    #----      ------
    #10.5.1.68       1
    #10.5.1.13       0
    # Now we wrap our output in a hashtable so PRTG can interpret it properly
    $body = @{ items = $items }
    # This outputs a 1 if healthy, and a 0 if not. PRTG sensor will then alert.
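
Although I stop here at building $body, in an HTTP-triggered PowerShell Function the hashtable would typically be handed back to PRTG through the HTTP output binding. A hedged sketch, assuming the standard “Response” binding name from the default template, would look like this:

    # Return $body to the caller (PRTG) as JSON via the HTTP output binding
    Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [System.Net.HttpStatusCode]::OK
        Body       = ($body | ConvertTo-Json -Depth 4)
    })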

See my other post about using PRTG to call an Azure Function for details on how I tie this result into a PRTG sensor.

Simple disk performance test on Linux

I recently had a need to benchmark some Azure VMs, as I’m looking at Azure Storage Fuse and want to evaluate where bottlenecks in performance might be. I’m mostly saving this as a reference so I can come back to it again in the future.

The ‘fio’ tool can be used to perform this benchmarking. I originally found this in an Ars Technica article.

First, I need to create a profile that fio will use. I created a few different ones for different scenarios, mostly varying the disk target through the directory attribute of each reader.

Gist of ini files

# Read test from P20
[global]
size=1g
direct=1
iodepth=8
ioengine=libaio
bs=512k

[reader1]
rw=randread
directory=/tmp
[reader2]
rw=randread
directory=/tmp
[reader3]
rw=randread
directory=/tmp
[reader4]
rw=randread
directory=/tmp

I saved this file to /tmp/fioread.ini. This gives me a test reading 1 GB files with a block size of 512k and an iodepth of 8 in a random pattern. Four readers are configured to maximize throughput.

Then, I called fio with this:

fio --runtime 60 --time_based /tmp/fioread.ini

This runs the test for 60 seconds, and the --time_based flag ensures it runs for the full time even if the dataset size has been processed (it loops back around and begins again).

When I start the command, it lays out the files as defined in the .ini file being used.

Since I have 4 readers, it produces 4 x 1 GB files in /tmp:

Then it immediately goes into the test, and after 60 seconds, produces results.

Each reader will have its own block of stats, with a summary at the bottom:

In this case, I’ve gotten 139 MB/s reading from a P4 (30GB) disk on an E8s_V3. The disk has a throughput of 25MB/s but can burst to 170 MB/s, and the E8s_V3 will sustain 128 MB/s on a cached disk (which I have read/write cache on). I’m not exactly sure yet why I was able to exceed 128 MB/s; perhaps part of the reads were already coming from cache from the layout, but not all of them?

I suspect that if I set cache to None on this OS disk, I’d see closer to the 170 MB/s burst, because the E8s_V3 can sustain 192 MB/s on an uncached disk.