Parameterized Update Configuration deployment

After getting a functional script for deploying an Update Configuration in my last post, I began working on making it worthwhile to use in the future.

I wanted to parameterize the JSON so that it could be re-used efficiently, without having to scroll through the file to the JSON body and manually edit sections. This was needed for both the Subscriptions/ResourceGroups (in the Scope object) and the Tags that I wanted to use.

In this example, I wanted a scheduled deployment that ran once (so I could control the date for the first Wednesday after the second Tuesday of the month), with a maintenance window of 4 hours, with dynamic grouping applied against multiple subscriptions and virtual machines with a defined “MaintenanceWindow” tag.

For the Scope definition, I created a PowerShell array containing the resource groups' resource IDs. The format of these can be found in the Portal under "Properties" of the resource group.

# Populate this array with the subscription IDs and resource group names that it should apply to
$scopeDefinition = @(
  "/subscriptions/$subscriptionId/resourceGroups/managementVMs"
  ,"/subscriptions/$subscriptionId/resourceGroups/webVMs"
)

The Tag definition was created as a Hashtable, noting the tag value that I wanted to include:

# Populate this Hashtable with the tags and tag values that it should be applied to
$tagdefinition = @{
  MaintenanceWindow = @("Thurs-2am-MST-4hours")
}

* side note: since Update Management doesn't operate on the VM's local time (as SCCM can), I need to build my maintenance windows around discrete time zones, and have multiple update configurations for these time zones even if the local time that I'm targeting is always the same (i.e. 2am).

I can now convert these PowerShell variables into JSON:

$applyResourceGroup = $scopeDefinition | ConvertTo-JSON
$applyTags = $tagdefinition | ConvertTo-JSON

and reference them within my JSON body like this:

"targets": 
      {
        "azureQueries": [{
                  "scope": $applyResourceGroup,
                  "tagSettings": {
                      "tags": $applyTags,
                      "filterOperator": "Any"
                  },
                  "locations": null
              }]
      }
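One pitfall worth noting (an aside, not something that bit the two-entry example above): in Windows PowerShell, piping an array with a single element through ConvertTo-Json unwraps it into a bare string, which breaks the "scope" property since the API expects a JSON array. Passing the array as -InputObject preserves the wrapper:

```powershell
# With two entries, piping works fine and produces a JSON array:
@("/subscriptions/abc/resourceGroups/rg1", "/subscriptions/abc/resourceGroups/rg2") | ConvertTo-Json

# With one entry, the pipeline enumerates the array and emits a bare string:
@("/subscriptions/abc/resourceGroups/rg1") | ConvertTo-Json

# Passing as -InputObject keeps the array wrapper even for a single element:
ConvertTo-Json -InputObject @("/subscriptions/abc/resourceGroups/rg1")
```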

Full Example

Here’s a full example that parameterizes the full contents of the JSON body:

 
# API Reference: https://docs.microsoft.com/en-us/rest/api/automation/softwareupdateconfigurations/create
### Monthly Parameters ###
$deploymentName = "SUG_Thurs-2am-MST-4hours"
$starttime = "2018-11-15T02:00:00+00:00" 
 
# Scope Parameters
$clientsubscription = "<subscription_id_to_target>"
# Populate this array with the subscription IDs and resource group names that it should apply to
$scopeDefinition = @(
  "/subscriptions/$clientsubscription/resourceGroups/managementVMs"
  ,"/subscriptions/$clientsubscription/resourceGroups/webVMs"
)
 
# Static Schedule Parameters
$AutomationRG = "test-rg" # Resource Group the Automation Account resides in
$automationAccountName = "test-automation"
$automationSubscription = "<subscriptionId>" # Subscription that the Automation Account resides in
# Populate this Hashtable with the tags and tag values that it should be applied to
$tagdefinition = @{
  MaintenanceWindow = @("Thurs-2am-MST-4hours")
}
$duration = "PT4H0M" # Maintenance window, as an ISO 8601 duration (e.g. PT2H0M) - change the numbers for hours and minutes
$rebootSetting = "IfRequired" # Options are Never, IfRequired
$includedUpdateClassifications = "Critical,UpdateRollup,Security,Updates" # List of options here: https://docs.microsoft.com/en-us/rest/api/automation/softwareupdateconfigurations/create#windowsupdateclasses
$timeZone = "America/Edmonton" # IANA time zone name (from the tz database)
$frequency = "OneTime" # Valid values: https://docs.microsoft.com/en-us/rest/api/automation/softwareupdateconfigurations/create#schedulefrequency
$interval = "1" # How often to recur based on the frequency (i.e. if frequency = Hour and interval = 2, it runs every 2 hours); left set so the "interval" property in the JSON body below stays valid
 
 
### These values below shouldn't need to change
Select-AzureRmSubscription -Subscription "$automationSubscription" # Ensure PowerShell context is targeting the correct subscription
$applyResourceGroup = $scopeDefinition | ConvertTo-JSON
$applyTags = $tagdefinition | ConvertTo-JSON
# Get the access token from a cached PowerShell session
. .\Get-AzureRmCachedAccessToken.ps1 # Source = https://gallery.technet.microsoft.com/scriptcenter/Easily-obtain-AccessToken-3ba6e593
$BearerToken = ('Bearer {0}' -f (Get-AzureRmCachedAccessToken))
$RequestHeader = @{
  "Content-Type" = "application/json";
  "Authorization" = "$BearerToken"
}
 
# JSON formatting to define our required settings
$Body = @"
{
"properties": {
  "updateConfiguration": {
    "operatingSystem": "Windows",
    "duration": "$duration",
    "windows": {
      "excludedKbNumbers": [],
      "includedUpdateClassifications": "$includedUpdateClassifications",
      "rebootSetting": "$rebootSetting"
    },
    "azureVirtualMachines": [],
    "targets": 
      {
        "azureQueries": [{
                  "scope": $applyResourceGroup,
                  "tagSettings": {
                      "tags": $applyTags,
                      "filterOperator": "Any"
                  },
                  "locations": null
              }]
      }
  },
  "scheduleInfo": {
    "frequency": "$frequency",
    "startTime": "$starttime",
    "timeZone": "$timeZone",
    "interval": $interval,
    "isEnabled": true
  }
}
}
"@
 
# Build the URI string to call with a PUT
$URI = "https://management.azure.com/subscriptions/$($automationSubscription)/" `
   +"resourceGroups/$($AutomationRG)/providers/Microsoft.Automation/" `
   +"automationAccounts/$($automationaccountname)/softwareUpdateConfigurations/$($deploymentName)?api-version=2017-05-15-preview"
 
# use the API to add the deployment
$Response = Invoke-RestMethod -Uri $URI -Method Put -body $body -header $RequestHeader
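After the PUT returns, a quick sanity check is to GET the same URI and confirm the provisioning state (a sketch re-using the $URI and $RequestHeader built above):

```powershell
# Read the configuration back and confirm Azure accepted it
$check = Invoke-RestMethod -Uri $URI -Method GET -Headers $RequestHeader
$check.properties.provisioningState # Expect "Succeeded" (it may briefly show "Provisioning")
```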

 

Azure Update Management deployments – programmatically

I’ve been working through a method to add Scheduled Deployments with Azure Update Management. The particular problem is that the docs for Update Management don’t reference any kind of programmatic way to add Scheduled Deployments, and while I did find the PowerShell cmdlets (in preview), they don’t allow for use of the new features such as Reboot Control and Dynamic Groups.

After a little bit more investigation, I came across the Azure REST API for “Software Update Configurations”. Using a REST API is new to me but at first glance it seems like it would achieve what I wanted, so I dove in.

GET Configuration

I wanted to test with a GET command first, to make sure that I understood how to use the API correctly and interpret the output.

First I created a simple update deployment in the Portal to test with.

When I began looking at how to make the API call, I found a lot of instruction on creating a Service Principal in Azure Active Directory, in order for my code to authenticate against the Azure REST API and make the calls it needs. In my case I’m building a script to run ad-hoc, not part of an integrated program or code-base that needs repeated and fully automated authentication. My thinking was, “if I’m authenticated to Azure in a PowerShell session (through Login-AzureRMAccount) why can’t I just use that to make the call?”

This exact thing is possible, as I discovered through this PowerShell Gallery submission. After loading and testing with this function, I found that it could successfully use my PowerShell session as the access token for my API call.

Putting that together with the GET example, I ended up with this PowerShell script:

# Define my parameters to the Automation Account
$resourceGroupName = "test-rg"
$automationAccountName = "test-automation"
$SubscriptionId = "subscription id"
$ConfigName = "2018-08-SUG"
 
# Import the function, contained in a file from the same directory
. .\Get-AzureRmCachedAccessToken.ps1
# Use the function to get the Access Token
$BearerToken = ('Bearer {0}' -f (Get-AzureRmCachedAccessToken))
# Add the Access Token into proper format
$RequestHeader = @{
    "Content-Type"  = "application/json";
    "Authorization" = "$BearerToken"
}
# Build the URI referencing my parameters
$URI = "https://management.azure.com/subscriptions/$($SubscriptionID)/"`
    + "resourceGroups/$resourcegroupname/providers/Microsoft.Automation/"`
    + "automationAccounts/$automationaccountname/softwareUpdateConfigurations/$($ConfigName)?api-version=2017-05-15-preview"
 
# Use the URI and the Request Header (with access token) and the method GET
$GetResponse = Invoke-RestMethod -Uri $URI -Method GET -header $RequestHeader

 

This returned output that looked like this:

id         : /subscriptions/f745d13d/resourceGroups/test-rg/providers/Microsoft.Autom
             ation/automationAccounts/test-automation/softwareUpdateConfigurations/2018-08-SUG
name       : 2018-08-SUG
type       :
properties : @{updateConfiguration=; scheduleInfo=; provisioningState=Succeeded; createdBy={scrubbed}; error=; tasks=;
             creationTime=2018-08-16T14:09:34.773+00:00; lastModifiedBy=;
             lastModifiedTime=2018-10-24T03:56:51.02+00:00}

Fantastic! This can be turned into readable JSON by piping your $GetResponse like this:

 $GetResponse | ConvertTo-JSON

PUT Configuration

Let’s look at what it takes to use the PUT example from the REST API reference. Primarily, the differences are passing a JSON body to Invoke-RestMethod, and changing the method to PUT rather than GET.

Before getting to working examples, I want to highlight problems I ran into using the API reference. I kept getting obscure errors on the "targets" section of the JSON body that I couldn't figure out for a while, until I began looking very closely at the output of my GET from an update deployment created in the Portal and compared it against what I was trying to do with my JSON.

Something that particularly helped here was piping my $GetResponse like this:

$GetResponse | ConvertTo-JSON -Depth 10

This converts the output of Invoke-RestMethod into JSON, and overrides the default depth of 2 so that all the objects within the targets array are fully expanded.

What I noticed is that the “Targets” object returns a proper structure with a single nested object (“azureQueries”) which itself is a list (as denoted by the square brackets):

 "targets": {
                "azureQueries": [
                    {
                        "scope": [ ],
                        "tagSettings": {},
                        "locations": null
                    }
                ]
            }

However, this is what the API reference uses as its structure:

"targets": [
    {
      "azureQueries": {
        "scope": [ ],
        "tagSettings": { },
        "locations":  null
      }
    }
  ]

Note that the square brackets around the "targets" object shouldn't be there, and that they're missing from the "azureQueries" object.

Once this was solved, my very basic “Create” script with static JSON seemed to work.

Working Example

Here’s an example of my full PowerShell script, which creates a recurring schedule for Definition Updates applied against a specific resource group, targeting any VM with the tag "ClientID" set to "Client1".

# Define my parameters to the Automation Account
$resourceGroupName = "test-rg"
$automationAccountName = "test-automation"
$SubscriptionId = "<subscription_id>"
$ConfigName = "MalwareDefinitions"
 
# Import the function, contained in a file from the same directory
. .\Get-AzureRmCachedAccessToken.ps1
# Use the function to get the Access Token
$BearerToken = ('Bearer {0}' -f (Get-AzureRmCachedAccessToken))
# Add the Access Token into proper format
$RequestHeader = @{
    "Content-Type"  = "application/json";
    "Authorization" = "$BearerToken"
}
 
$Body = @"
{
  "properties": {
    "updateConfiguration": {
      "operatingSystem": "Windows",
      "duration": "PT0H30M",
      "windows": {
        "excludedKbNumbers": [    ],
        "includedUpdateClassifications": "Definition",
        "rebootSetting": "Never"
      },
      "azureVirtualMachines": [
 
      ],
      "targets": 
        {
          "azureQueries": [{
                    "scope": [
                        "/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName"
                    ],
                    "tagSettings": {
                        "tags": {
                            "ClientID": [
                                "Client1"
                            ]
                        },
                        "filterOperator": "Any"
                    },
                    "locations": null
                }]
        }
    },
    "scheduleInfo": {
      "frequency": "Hour",
      "startTime": "2018-10-31T12:22:57+00:00",
      "timeZone": "America/Los_Angeles",
      "interval": 2,
      "isEnabled": true
    }
  }
}
"@
 
$URI = "https://management.azure.com/subscriptions/$($SubscriptionId)/"`
    + "resourceGroups/$($resourcegroupname)/providers/Microsoft.Automation/"`
    + "automationAccounts/$($automationaccountname)/softwareUpdateConfigurations/$($ConfigName)?api-version=2017-05-15-preview"
 
#This creates the update scheduled deployment
$Response = Invoke-RestMethod -Uri $URI -Method Put -body $body -header $RequestHeader

 

Azure NSG discovery

During deployment of some resources with an Azure virtual network which has subnets with network security groups (NSG) applied, I discovered behavior that I didn’t previously know about. It makes sense in the context of how Azure applies NSG rules, but it doesn’t align with a traditional understanding of firewall ACLs across a subnet.

Communication within subnet

If you apply a Deny rule with a priority number lower than the default 65000 “Allow Vnet inbound” rule (so that it is evaluated first), it will also deny resources within that subnet from communicating with each other.

I discovered this while applying a “Deny inbound” rule in order to restrict lateral movement between subnets, not intending to restrict traffic within a subnet.

For example, I have a “management” subnet, with an NSG applied. Inside this subnet is an AD domain controller, and a member server. I apply a Deny rule for any source, after my “allow incoming” rules have been applied to let other subnets talk to this domain controller.

Now I find that my domain controller cannot reach my member server, despite it residing within the same subnet.

While I do not want to allow service tag “VirtualNetwork” incoming access (again, to restrict lateral movement), I do want “everything inside this subnet can talk to everything inside this subnet”. As such I had to create a specific rule for this behavior.
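To illustrate, here's a sketch of such an intra-subnet allow rule using the AzureRM cmdlets; the NSG name, resource group, subnet CIDR, and priority number are all assumptions for the example, and the priority just needs to be a lower number than the Deny rule so it is evaluated first:

```powershell
# Allow the management subnet (assumed 10.0.1.0/24) to talk to itself,
# evaluated before the broad Deny inbound rule
$nsg = Get-AzureRmNetworkSecurityGroup -Name "management-nsg" -ResourceGroupName "network-rg"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-IntraSubnet" `
    -Description "Everything inside this subnet can talk to everything inside this subnet" `
    -Access Allow -Protocol * -Direction Inbound -Priority 3900 `
    -SourceAddressPrefix "10.0.1.0/24" -SourcePortRange * `
    -DestinationAddressPrefix "10.0.1.0/24" -DestinationPortRange * |
    Set-AzureRmNetworkSecurityGroup
```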

Azure Function – Resolve DNS

As part of my search to provide outbound Deny on an Azure NSG with whitelisted FQDN entries, I started looking at Azure Functions.

The idea is that I would have an Automation runbook on a schedule, which called my function for a variety of domain names, receiving the resolved IP addresses in return. These would then be compared against outbound NSG rules, and if the resolved IP differs from what is in the NSG, it would update it.

In reality there isn’t much need for this, since you can do the DNS resolution right in the runbook with this:

$currentIpAddress = [system.net.dns]::GetHostByName("$fqdn").AddressList.IPAddressToString
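For completeness, here's a rough sketch of the compare-and-update logic the runbook would have used; the NSG and rule names are assumptions, and this glosses over FQDNs that resolve to multiple IPs:

```powershell
# Resolve the FQDN and compare against the NSG rule's current destination
$fqdn = "example.arcgisonline.com" # hypothetical whitelisted FQDN
$currentIpAddress = [system.net.dns]::GetHostByName("$fqdn").AddressList.IPAddressToString

$nsg  = Get-AzureRmNetworkSecurityGroup -Name "app-nsg" -ResourceGroupName "network-rg"
$rule = Get-AzureRmNetworkSecurityRuleConfig -Name "Allow-GIS-Out" -NetworkSecurityGroup $nsg

if ($rule.DestinationAddressPrefix -notcontains $currentIpAddress) {
    # Re-write the rule with the freshly resolved IP, keeping the other settings
    Set-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -NetworkSecurityGroup $nsg `
        -Access Allow -Protocol * -Direction Outbound -Priority $rule.Priority `
        -SourceAddressPrefix * -SourcePortRange * `
        -DestinationAddressPrefix $currentIpAddress -DestinationPortRange 443 |
        Set-AzureRmNetworkSecurityGroup
}
```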

There are other limitations with this idea as well:

  • For a globally-managed DNS behind some type of CDN or round-robin mechanism, it’s possible that IP resolution would continually be different. Take “smtp.office365.com” for example.
  • There isn’t a way to manage wildcard whitelists – “*.windowsupdate.com” isn’t something you can resolve to individual IP addresses.

All that being said, I still used this as a learning opportunity for my first function.

To begin, in the Azure Portal I went to the “App Services” blade, clicked “Add”, and searched for Function.

During creation I accepted most of the defaults, and was left with a v2 Function App and the initial “HttpTriggerCSharp1” function.

I am by no means a programmer, and certainly not familiar with C# from ASP.NET Core, as evidenced by my previous post. With that in mind, here are the contents of the function that I ended up with:

#r "Newtonsoft.Json"
 
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using System.Text;
 
public static async Task<string[]> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
 
    string name = req.Query["name"];
 
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;
    List<string> collectedIP = new List<string>();
    IPAddress[] ipaddresses = null;
    if (name != null){
        try
        {
            // Putting this in the Try because if it errored out I wanted to see that
            // as a defined message rather than failure of the function
            ipaddresses = Dns.GetHostAddresses(name);
        }
        catch (Exception)
        {
            log.LogInformation("Did not resolve IP from: " + name);
            collectedIP.Add("Did not resolve");
        }
 
        if (ipaddresses != null)
            {
                // Knowing that multiple IPs could be returned for a record, used a ForEach
                foreach (IPAddress ip in ipaddresses)
                {
                    log.LogInformation("Resolved " + name + " to " + ip.ToString());
                    // Add the resolved IP to a string list
                    collectedIP.Add(ip.ToString());
                    log.LogInformation("Added IP to list");
                }
                log.LogInformation("End of If Ipaddresses isn't null");
 
            }
            log.LogInformation("End of If Name isn't null");
    }     
    else
    {
        //return a string
        log.LogInformation("No IP passed In");
        collectedIP.Add("No IP passed in");
    }
    log.LogInformation("Ready to return value");
    // Return the string list as an array to the calling entity
    return collectedIP.ToArray();
}

Now I run this function in Test mode, with a query parameter named “name”.

And I get results both in my log and in the Output.

To bring this into my Automation runbook, I retrieved the function URL; since this is a private function, it includes my key value. This PowerShell command is then used to invoke the function, with the parameter at the end of the URL:

$IPList = Invoke-WebRequest 'https://functionappname.azurewebsites.net/api/HttpTriggerCSharp1?code=<privatekey>&name=www.microsoft.com'

Due to the limitations I mentioned at the start of this post, I never went far enough in my runbook to connect this $IPList into logic for updating the NSG.
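Had I continued, the first step would have been converting the function’s response body (a JSON array of strings) back into a PowerShell array, something like:

```powershell
# The function returns a JSON array of strings; ConvertFrom-Json turns
# the response body into a string array the runbook can iterate over
$resolvedIPs = $IPList.Content | ConvertFrom-Json
foreach ($ip in $resolvedIPs) { Write-Output "Resolved IP: $ip" }
```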

Azure IaaS Deny outbound considerations

As a general practice, outbound Internet access should be denied except for approved destinations. This is referenced in NIST 800-41 as a “deny by default” posture.

Achieving this within Azure Infrastructure as a Service in a practical and economical way without breaking a large amount of services is quite difficult at the moment.

If Outbound Internet is fully denied, some of the commonly used services of Azure will cease to work:

  • Azure Backup
  • Log Analytics
  • Azure State Configuration (DSC)
  • Azure Update Management
  • Azure Security Center
  • Windows Update

Some of these are not as difficult to solve – Service Tags on NSG rules can allow Azure services where they have been defined by Microsoft. As of Ignite 2018 in late September, there are new service tags covering entire regions, or all of Azure (“AzureCloud”). This means you can allow most of those services above to function and still deny general Internet outbound.

Additional Service Tags for Windows Update, or custom definitions are supposed to be coming in the future, but this doesn’t fully resolve the problem.
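As a sketch of the service-tag approach, an outbound rule allowing the “AzureCloud” tag ahead of a broad deny might look like this with the AzureRM cmdlets (the NSG name, resource group, and priorities are assumptions):

```powershell
# Allow outbound to Azure public IPs via the "AzureCloud" service tag,
# then deny everything else outbound with a higher priority number
$nsg = Get-AzureRmNetworkSecurityGroup -Name "app-nsg" -ResourceGroupName "network-rg"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-AzureCloud-Out" `
    -Access Allow -Protocol * -Direction Outbound -Priority 4000 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix "AzureCloud" -DestinationPortRange * |
    Add-AzureRmNetworkSecurityRuleConfig -Name "Deny-Internet-Out" `
    -Access Deny -Protocol * -Direction Outbound -Priority 4096 `
    -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix "Internet" -DestinationPortRange * |
    Set-AzureRmNetworkSecurityGroup
```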

What if your application has a GIS component, and it needs to reach *.arcgisonline.com? What if your users have a legitimate reason to access a particular website? It isn’t good enough to just resolve that IP address one time and add it to an NSG.

What is really needed is a method to allow access to a fully qualified domain name (FQDN), particularly with wildcard support.

Here are some possible solutions:

Implement a 3rd party network virtual appliance (NVA)

This is the most common response that I see recommended for the outbound problem. Unfortunately, it is really expensive, and overkill if you're only addressing this one particular problem. You also have to consider high availability and ongoing management of the appliances, since you're adding more IaaS into your environment, which is what we're all trying to get away from by using the cloud, isn't it?

Some vendors may not support wildcard FQDNs in their ACLs (Barracuda CloudGen, last I checked), which means you can't support things like Windows Update where no published IP list exists.

If the implementation is anything like SonicWALL's method, it will have difficulty being reliable: that approach relies upon the SonicWALL using the same DNS server as the client (calling it 'sanctioned'), which may or may not be true in your Azure environment with the use of Azure DNS or external providers.

Implement Azure Firewall

Azure Firewall is new on the scene and released to General Availability as of late September 2018. It supports the use of FQDN references in application rules, and while I haven’t personally tested it, the example deployment template is shown to allow an outbound rule to *microsoft.com.

Confusingly, the documentation states that FQDN tags can't be custom created, but I believe this refers only to predefined groups of FQDNs, not to individual FQDN entries in application rules.

Azure Firewall solves the problem of deploying more IaaS, and it's natively highly available. However, it again isn't cheap: at $1.25/hour USD, it is a high price to pay for just this one feature.

Wait until FQDN support exists in an NSG rule

It has been noted on the Microsoft feedback site that FQDN support in NSG rules is a roadmap item, but since it hasn't received the "Planned" designation yet, I expect it is very far down the roadmap, particularly considering this feature is already available in Azure Firewall.

Build something custom – Azure Function or runbook which resolves DNS and adds it to your NSG

I’ve toyed with the idea of building a custom Azure Function or Automation runbook which can resolve a record and add it to an NSG. I’ll have a post on the Function side of this coming soon that describes how it would work, and the limitations that made me discard the idea.

Realistically, this isn’t a long-term viable solution as it doesn’t solve the wildcard problem.

Utilize an outbound transparent proxy server

This method involves trusting some other source to proxy your outbound traffic; depending on that source, it gives a large amount of flexibility to achieve the outbound denial without breaking your services.

This could be an IaaS resource running Squid or WinGate (a product I’m currently testing for this purpose), or it could be an external 3rd party service like zScaler which specializes in access control of this nature.

To make this work, your proxy must be able to be identified by some kind of static IP to allow it through the NSG, but after that the whitelisting could happen within the proxy service itself.

I see this as the most viable method of solving the problem until either FQDN support exists for NSG, or Azure Firewall pricing comes down with competition from 3rd party vendors.