VM Images – Immutable builds and Azure resources

I’ve begun poking around with custom Azure managed images, using Packer as an on-premises build source. The goal is to use the same image within an on-premises test environment and in an Azure production environment, and take a small step towards immutable infrastructure.

There are lots of interesting questions around this topic that may have unique answers in different environments. Here are a few thoughts on what I see from where I am right now:

  • Should I use Packer to build a traditional Hyper-V image, convert it to VHD and upload it to Azure, or directly use the Packer builder for Azure?
    • Because I have the on-premises resources, am building upon a pre-existing framework that already produces local-only images, and need images for both Hyper-V and Azure, I decided to keep the build chain consistent rather than split it off to a whole new builder.
  • Should I just use the Azure Image Builder to streamline the process?
    • Maybe eventually – again, I wanted to incrementally build upon the successes of existing on-premises deployments. The Image Builder service is very intriguing, and would be a next logical step once it leaves Preview.
  • Why do images need to be built for on-premises in the first place? Why not native Azure resources?
    • While deploying into Azure would offer a lot more flexibility and scale, there are still many reasons to maintain a local presence for resources, and it mostly boils down to finances: pre-existing CapEx investments already exist, whereas new OpEx costs would be incurred without the appropriate systems in place to constrain size, resource lifetime, and ultimately cost overruns.
  • Why build images, and not do configuration management after deploy?
    • I go back and forth on this question frequently, and I have seen much conversation about it. Like most of IT, “it depends”. Using customized images that are version controlled gives the infrastructure the ability to shift left, and ensures quality is built in repeatably and consistently. What if you’re doing post-deployment config management, and DNS isn’t available, or a service crashes halfway through, or any number of other things that can go wrong? Now there is a delay in the availability of that resource you’ve deployed, effort consumed to resolve the problem, and a lack of confidence in its quality.
    • Immutable builds do not natively solve the problem of configuration drift post-deployment, and this is one of the big gaps I see when trying to take traditional IaaS and fit it into a more modern profile. The ‘answer’ is to monitor for drift and re-build from source (cattle, not pets) when it is detected, but not everyone is working with modern micro-services running in containers orchestrated centrally to achieve this. Instead, there may be an intermediary step where immutable image builds are used, along with post-deployment configuration management to watch for drift.


Once I got to a place where an image is ready, I began poring over the Microsoft Docs on managed images and Shared Image Gallery, prior to testing.

I intended a distribution flow something like this:

Packer drops VHD in Blob storage -> Create Managed Image -> Use Shared Image Gallery definition -> Create Image Version
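
For reference, here is a rough sketch of that flow in Az PowerShell. The resource names, VHD URI, and target regions are placeholders for illustration only; this isn't the exact script from my build chain:

# Placeholder values for illustration - substitute your own names and VHD URI
$rg       = "rg-images"
$location = "eastus2"
$vhdUri   = "https://mystorageacct.blob.core.windows.net/images/packer-build.vhd"

# 1. Create a managed image from the VHD that Packer dropped into Blob storage
$imageConfig = New-AzImageConfig -Location $location
$imageConfig = Set-AzImageOsDisk -Image $imageConfig -OsType Windows -OsState Generalized -BlobUri $vhdUri
$managedImage = New-AzImage -ResourceGroupName $rg -ImageName "img-web-base" -Image $imageConfig

# 2. Create (or reuse) a Shared Image Gallery and an image definition
$gallery = New-AzGallery -ResourceGroupName $rg -Name "myimagegallery" -Location $location
$definition = New-AzGalleryImageDefinition -ResourceGroupName $rg -GalleryName $gallery.Name `
    -Location $location -Name "web-base" -OsState Generalized -OsType Windows `
    -Publisher "MyOrg" -Offer "WebBase" -Sku "2019"

# 3. Publish an image version from the managed image, replicating to an additional region
New-AzGalleryImageVersion -ResourceGroupName $rg -GalleryName $gallery.Name `
    -GalleryImageDefinitionName $definition.Name -Name "1.0.0" -Location $location `
    -SourceImageId $managedImage.Id `
    -TargetRegion @( @{ Name = $location; ReplicaCount = 1 }, @{ Name = "centralus"; ReplicaCount = 1 } )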

The documentation left me with a few unanswered questions, which I’ve outlined here:

  • What if I remove the original blob, can I still use the image? Yes, you can continue to deploy the managed image without the source blob.
  • What if the blob gets replaced, does it update the image? No, any future deployments of the image will continue to use the contents captured when the image was created.
  • What if I remove the source managed image, can I still use the Shared Image Gallery definition version?
    • According to the guidance from Microsoft: Yes, but if you plan on adding replica regions, do not delete the source managed image. The source managed image is needed for replicating the image version to additional regions.
  • What if I update the source managed image, does it update the Shared Image Gallery definition version? Mostly no; similar to the blob-to-image relationship, if you update the source managed image, the existing version in a SIG definition doesn’t change. What I still need to test is what happens if you replace the source managed image and then replicate an image definition version to a new region – will the replica contain the updates in the image?

Here’s a few other important design discoveries I’ve made along the way:

  • A VM from an Azure Managed Image can only be deployed within the same Region and Subscription as the Image (i.e. if you want to re-use the image across multiple regions or subscriptions, you’ll have to create additional images to suit)
  • A VM from a Shared Image Gallery definition CAN be deployed outside a subscription, from any region to which it is replicated, as long as the authentication mechanism performing the VM deployment has RBAC over the Shared Image Gallery resource
  • Microsoft says “as a best practice, we encourage you to keep the resource group, shared image gallery, image definition, and image version in the same location.” I can confirm that if the resource group you place the Shared Image Gallery in is in a different Location than the SIG itself or the image definition, there are no barriers to creating those resources or a VM from them
  • Terraform AzureRM provider support (as of today at least) has limitations in managing Shared Image components:
    • You cannot set properties for VM Generation on the image definitions
    • Resource removal does not respect the dependencies between an image version, definition, and gallery


At the end of the day, I’ve come up with the following flow which builds and delivers my images:

Azure Function called from PRTG REST sensor

Using PRTG for service monitoring has been fairly effective for me, particularly with HTTP Advanced sensors to monitor a website. However, as more and more Azure resources are utilized, I want to continue to centralize my alerting and notifications within a single platform, and that means integrating some Azure resource status into PRTG.

At a high level, here’s what needs to happen:

  • Azure Function triggered on HTTP POST to query Azure resources and return data
  • PRTG custom sensor template to interpret the results of the Function data
  • PRTG custom lookup to establish a default up/down threshold
  • PRTG REST sensor to trigger the function, and use the sensor template and custom lookup to properly display results

Azure Function

For my first use-case, I wanted to see the health status of the back-end pool members of an Application Gateway:

The intended goal is that if a member becomes unhealthy, PRTG will alert using our normal mechanisms. I could use an Azure Monitor alert to trigger something when this event happens, but in reality it is easier for PRTG to poll than for Azure Monitor to trigger something in PRTG.

I’m not going to cover the full walk-through of building an Azure Function; instead here is a good starting point.

I’m using a PowerShell function, where the full source can be found here: GitHub link

Here’s a snippet of the part doing the heavy lifting:

#Proceed if all request body parameters are found
if ($appgwname -and $httpsettingname -and $resourcegroupname -and $subscriptionid -and $tenantid) {
    $status = [HttpStatusCode]::OK
    # Make sure we're using the right Subscription
    Select-AzSubscription -SubscriptionID $subscriptionid -TenantID $tenantid
    # Get the health status, using the Expanded Resource parameter
    $healthexpand = Get-AzApplicationGatewayBackendHealth -Name $appgwname -ResourceGroupName $resourcegroupname -ExpandResource "backendhealth/applicationgatewayresource"
    # If serving multiple sites out of one AppGw, use the parameter $httpsettingname to filter so we can better organize in PRTG
    $filtered = $healthexpand.BackEndAddressPools.BackEndhttpsettingscollection | where-object { $_.Backendhttpsettings.Name -eq "$($httpsettingname)-httpsetting" }
    # Return results as boolean integers, either healthy or not. Could modify this to be additional values if desired
    $items = $filtered.Servers | select-object Address, @{Name = 'Health'; Expression = { if ($_.Health -eq "Healthy") { 1 } else { 0 } } }
    # Add a top-level property so that the PRTG custom sensor template can interpret the results properly
    $body = @{ items = $items }
 
}
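
The snippet above populates $status and $body but stops before the response is actually returned. In the PowerShell Functions model that last step typically looks something like the following; the else branch wording is my own placeholder rather than the exact code in the linked source:

else {
    # One or more required parameters were missing from the request body
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass appgwname, httpsettingname, resourcegroupname, subscriptionid and tenantid in the request body."
}

# Return the HTTP response to the caller (PRTG), serialized as JSON by the Functions host
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = $status
    Body       = $body
})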

You can test this function using the Azure Functions GUI or Postman, or PowerShell like this:

    $appsvcname = "appsvc.azurewebsites.net"
    $functionName = "Get-AppGw-Health"
    $functionKey = " insert key here "
    $Body = @"
{
    "httpsettingname": " prodint ",
    "resourcegroupname": " rgname ",
    "appgwname": " appgw name ",
    "subscriptionid": " subid ",
    "tenantid": " tenant id "
}
"@
    $URI = "https://$($appsvcname)/api/$($functionName)?code=$functionKey"
    Invoke-RestMethod -Uri $URI -Method Post -body $body -ContentType "application/json"

Expected results would look like this:

You can see the function key ($functionKey) in the code above; I’ve created a function key for PRTG to authenticate with, rather than making this function part of a private VNET.


PRTG Custom Sensor Template

Now, in order to have PRTG interpret the results of that JSON body, and automatically create channels associated with each Item, we need to use a custom sensor template.

Here’s mine (github link):

{
  "prtg": {
    "description" : {
      "device": "azureapplicationgateway",
      "query": "/api/Get-AppGw-Health?code={key}",
      "comment": "Documentation is in Doc Library"
    },
    "result": [
      {
        "value": {
            #1: $..({ @.Address : @.Health }).*
        },
        "valueLookup": "prtg.customlookups.healthyunhealthy.stateonok",
        "LimitMode": 0,
        "unit": "Custom"
      }
    ]
  }
}

The important part here is the “value” property. The syntax for this isn’t officially documented, but Paessler support has provided a couple of examples that I used, such as here and here. The #1 before the first colon sets the channel name, using the first argument referenced within the braces (@.Address in this case). What is inside the braces associates that channel name with the value that is returned, which in this case is the boolean integer for “Health” that the Azure Function returns.

The valueLookup property references the custom lookup explained below.

This Sensor Template file needs to exist on the PRTG Probe that will be calling it, in this location:

Program Files (x86)\PRTG Network Monitor\Custom Sensors\rest
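
For example, copying it into place from an elevated PowerShell prompt on the probe server might look like this (the filename here is only an example; use whatever you named your template, keeping the .template extension):

# Copy the custom sensor template into the PRTG probe's REST template folder
Copy-Item -Path .\azureapplicationgateway.template -Destination "${env:ProgramFiles(x86)}\PRTG Network Monitor\Custom Sensors\rest"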


PRTG Custom Lookup

I want to be able to control what PRTG detects as “down” based on the values that my Function is returning. To do so, we apply a custom lookup (github link):

<?xml version="1.0" encoding="UTF-8"?>
  <ValueLookup id="prtg.customlookups.healthyunhealthy.stateonok" desiredValue="1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="PaeValueLookup.xsd">
    <Lookups>
      <SingleInt state="Error" value="0">
        Unhealthy
      </SingleInt>
      <SingleInt state="Ok" value="1">
        Healthy
      </SingleInt>
    </Lookups>
  </ValueLookup>

This matches my 0 or 1 return values to the PRTG states that a channel can be in. You can modify this to suit your Function return values.

This lookup file needs to exist on the PRTG Core, in this location:

Program Files (x86)\PRTG Network Monitor\lookups\custom

PRTG REST sensor

So let’s put it all together now. Create a PRTG REST Sensor, and apply the following settings to it:

Your PostData should match the parameters your Azure Function expects in the request body. The REST query points to the URL where your function is located, and uses the function key that was generated for authentication purposes.

Important to note: the REST sensor depends upon the Device it is created under to generate the hostname for the URL. This means you’ll need to have a Device created with a hostname matching your Function App URL, and the sensor’s REST Query then references the function name itself.

Make sure the REST Configuration is directed to your custom sensor template; this can only be set on creation.

Then for each channel that gets auto-detected, you will go and modify its settings and apply the custom lookup:

With that, you should have a good sensor in PRTG relaying important information collected from Azure!

I’m also using this same method to collect Azure Site Recovery status from Azure and report it within PRTG, using this Function.
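
I won’t reproduce that Function here, but the query it performs is conceptually similar to the Application Gateway one. A rough sketch of that kind of lookup, with placeholder vault details and the same 0/1 health mapping (an assumption for illustration, not the exact source), looks like this:

# Point the Az.RecoveryServices cmdlets at the Recovery Services vault (names are placeholders)
$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-recovery" -Name "asr-vault"
Set-AzRecoveryServicesAsrVaultContext -Vault $vault

# Walk the fabric/container hierarchy to reach the replication protected items
$fabric    = Get-AzRecoveryServicesAsrFabric | Select-Object -First 1
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric | Select-Object -First 1
$protected = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container

# Reduce replication health to the same 0/1 integers the PRTG sensor template expects
$items = $protected | Select-Object FriendlyName,
    @{ Name = 'Health'; Expression = { if ($_.ReplicationHealth -eq 'Normal') { 1 } else { 0 } } }
$body = @{ items = $items }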

Azure DevOps Pipeline or Release

After a significant amount of time envisioning a DevOps pipeline, hearing about them, reading about them on Twitter, and watching others create them, I finally made some time to get some hands-on experience. I chose to work my Azure DSC configuration deployment into a pipeline, which I’ll blog about sometime in the near future.

First, I ran into an almost immediate question: what is the difference between a Pipeline, and a Release?

From previous reading, I had learned that Continuous Integration (CI) was related to Build pipelines, which is what the first item in the menu was previously called. Continuous Deployment (CD) was then a Release pipeline, separated into its own menu item.

However, when I started playing around this week, I found a very confusing similarity between the two.

I could make a Pipeline with YAML or the classic editor, both including tasks and steps for builds and releases, and when I went to save, it would store a YAML file in my connected repository. This covers CI/CD in full and left me wondering where Releases fit.

I could also create a Release pipeline, in a graphical format only, with an Artifacts and a Stages section that seemingly had no connection to a repository other than creating a runtime artifact from one.

Reading the excellent Azure Docs for “Use Azure Pipelines“ shed a little more light on this:

  1. Define pipelines using YAML syntax
    1. “Your code is now updated, built, tested, and packaged. It can be deployed to any target.”
  2. Define pipelines using the Classic interface
    1. “Use the Azure Pipelines classic editor to create and configure your build and release pipelines”

In short, the concept of build AND release pipelines are now effectively the Classic mode of Azure Pipelines, with the new hotness being a single pipeline, created through YAML and stored as code within a branch in your repository.

I understand why Microsoft has not fully removed the classic method, but it does take some time to understand for someone like me jumping in during the middle of a transition.

As it stands right now on the Feature Availability table, it looks like Deployment Groups and Gates are the only features that YAML pipelines do not support compared to the classic Release pipeline.

Azure Site Recovery – ARM template

Programmatic deployment of Azure Site Recovery for Azure VMs – that’s the target I started with for a project a little while ago.

There is a large amount of information for Azure Site Recovery on Microsoft’s Docs; however, available code online to programmatically deploy a full setup is very sparse! Much of what Microsoft provides is PowerShell, which isn’t idempotent and doesn’t fit with my current tooling (Terraform and declarative infrastructure-as-code). I did consider Azure CLI, but couldn’t find any references for Site Recovery.

While working on this, I came across an Azure Quickstart which is a little incomplete: it starts with some good variable definitions but quickly devolves into hard-coded values from the environment it was exported from. I also discovered a blog post from Pratap Bhaskar which was useful, especially for understanding the loop mechanism in the template, but it didn’t go far enough for my purposes.

So I spun up a few VMs, manually configured ASR, and did an ARM template export. What I received was a huge number of properties on the resources that I was sure were relevant to runtime only, not creation. This was also a good reference, but not exactly where I needed to be.

The final piece of the puzzle that got me on my way was the REST API docs for Site Recovery. With this in hand, and the other sources I had at my disposal, I had the references I needed to begin putting together an ARM Template that would configure my environment end-to-end including Recovery Plans with automation runbooks.

There are a lot of design decisions I made when building this to fit my environment, some of which won’t make sense without additional context; most of which I can’t provide. That’s ok, as I hope it at least serves as a reference for “what’s possible” to others who come across it. Here is the overall structure:

  • Pre-define and create destination resources like resource groups, virtual networks, and subnets with Terraform
  • Deploy ASR for a subset of Virtual Machines, targeting the destination resources
    • Include dependent resources like a source-side storage account for cache, and an Azure Automation account in the same region and subscription as the ASR resources
  • Deploy a Recovery Plan that provides runbook functionality to configure a Test Failover environment
    • This environment was intended to be completely isolated, to ensure there’s no chance of contamination with prod
    • The current design of my web servers has multiple IP configurations; these need to be replaced
    • Access to the environment is provided through a Jump Host, which needs a known-in-advance IP address


I have documented this output in a GitHub repo called arm-azuresiterecovery.
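
As a usage sketch, a template like this gets deployed into the pre-created resource group with something along these lines; the resource group, file names, and deployment name are placeholders rather than the repo’s exact interface:

# Deploy the cleaned-up ASR template into the resource group that Terraform pre-created
New-AzResourceGroupDeployment -Name "asr-deployment" `
    -ResourceGroupName "rg-asr" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json" `
    -Verbose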


The majority of my time was actually spent cleaning up the template into proper parameters and variables for effective re-use, and then solving all the syntax challenges and typos that come along with that.


There are still some loose ends in what I’ve created, around certain manual steps still required. However, as I’m sure is common in the industry, it is good enough to deploy and I must move on; fine-tuning comes later.

Invoke-AzVMRunCommand log output

While working with the Invoke-AzVmRunCommand cmdlet, I encountered a problem, receiving the following error:

Invoke-AzVMRunCommand : Long running operation failed with status 'Failed'. Additional Info:'VM has reported a failure when processing extension 'RunCommandWindows'. Error message: "Finished executing command".'

I’m attempting to push a DSC configuration into a VM and run it, and although the PowerShell runs successfully when executed from inside the VM, I still see this error.
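
For context, the call being made looks roughly like this; the script path and VM details are placeholders for illustration:

# Push a local PowerShell script into the VM and execute it via the RunCommand extension
$result = Invoke-AzVMRunCommand -ResourceGroupName "rg-web" -VMName "vm-web01" `
    -CommandId "RunPowerShellScript" -ScriptPath ".\Apply-DscConfiguration.ps1"

# The aggregate result comes back in the Value collection (stdout / stderr messages)
$result.Value | Format-List Code, Message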

I came across a GitHub issue with a helpful tip: the log output of Invoke-AzVmRunCommand can be viewed at this path:

C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows\<version>\Status
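
From inside the VM, a quick way to pull the contents of the most recent status file (assuming the default extension path above) is:

# Grab the newest RunCommand status file and display its contents
Get-ChildItem "C:\Packages\Plugins\Microsoft.CPlat.Core.RunCommandWindows\*\Status\*" |
    Sort-Object LastWriteTime |
    Select-Object -Last 1 |
    Get-Content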

This helped me determine what the problem was and how to solve it.