Terraform nested for_each example

Today I needed a double for_each in my Terraform configuration: the ability to for_each over one thing and, for each of those items, for_each over another thing.

Here’s the context:

I want to produce two Azure Private DNS Zones, with records inside each of them, but conditionally. Think of it as ‘zones’ – zone A and zone B will be unique in their identifiers, but have commonalities in the IP addresses used.

I want to do this conditionally (a zone may not always exist), but also without repeating myself in code.

Let’s start with a variable containing a map of my zones:

variable "zoneversions" {
  default = {
        "zonea" = {
            "zonename" = "a",
            "first3octets" = "10.9.3"
        },
        "zoneb" = {
            "zonename" = "b",
            "first3octets" = "10.9.4"
        }
    }
}

Here I’m creating an object that will work with for_each syntax. You’ll note I’m including additional attributes that are unique to each zone – this will come in handy later.

This variable allows me to create my Azure DNS private zones like this:

resource "azurerm_private_dns_zone" "zones-privatedns" {
  for_each            = var.zoneversions
  name                = "${each.value.zonename}.domain.com"
  resource_group_name = azurerm_resource_group.srv-rg.name
  }
}

This is using the “each.value” syntax, referencing the attributes of each zone. This Terraform will produce a Private DNS zone for each entry in the map (a.domain.com and b.domain.com).

Now I want to populate each zone with records.
First, I’m going to use a local variable (it could be a regular variable too) that will create a map of keys (common parts of server names) and values (the last octet of the IP addresses):

locals {
  ipaddresses = {
    web                = ".3"
    rdp                = ".4"
    dc                 = ".10"
    db                 = ".11"
  }
}

For each zone that I have (a or b), I want to create a DNS record for each key in this map (hence the double for_each). Terraform won’t let you combine for_each and count on the same resource, and it doesn’t natively support two for_each expressions.

After a lot of trial and error (using terraform console to test), I came up with the code below. This article, with a post by ‘apparentlysmart’, was a big help in the final task and helped me understand the structure of what I was trying to build.

I need two new local variables. The first produces a flattened list of the combinations I’m looking for. Then, since for_each wants a map (or a set of strings), the second converts that list into a map.

locals {
  zonedips-list = flatten([ # Produce a list of maps, containing a name and IP address for each zone we specify in our variable
    for zones in var.zoneversions: [
      for servername,ips in local.ipaddresses: {
        zonename = "${zones.zonename}"
        name = "${zones.zonename}${servername}"
        ipaddress = "${zones.first3octets}${ips}"
      }
    ]
  ])

  zonedips-map = { # Take the list, and turn it into a map, so we can use it in a for_each
    for obj in local.zonedips-list : "${obj.name}" => obj # this sets the key of our new map to obj.name (e.g. "aweb") and => keeps the attributes of the object the same as the original
  }
}

Then I can use that second local when defining a single “azurerm_private_dns_a_record” resource:

resource "azurerm_private_dns_a_record" "vm-privaterecords" {
  for_each            = local.zonedips-map
  name                = each.value.name
  zone_name           = azurerm_private_dns_zone.zones-privatedns[each.value.zonename].name
  resource_group_name = azurerm_resource_group.srv-rg.name
  ttl                 = 300
  records             = [each.value.ipaddress]
}

This is where the magic happens. Because each object in my “zonedips-map” map carries those attributes, I can reference them with the ‘each.value’ syntax. So the name field of my DNS record will be equivalent to “${zones.zonename}${servername}” – “aweb”, “bweb” and so on as the for_each iterates. To place each record in the correct zone, I’m using index selection on the resource within the “zone_name” attribute – this says: refer to the private DNS zone with the Terraform identifier “zones-privatedns”, but select the instance (since there are multiple) whose key matches my zone name.

This is where terraform console comes in really handy; I can produce a simple Terraform config (without an AzureRM provider) that contains these items, with either outputs or a placeholder resource (like a file).

For example, take the Terraform configuration below, do a “terraform init” on it, and then run the “terraform console” command.

terraform {
  backend "local" {
  }
}
 
locals {
  zonedips-list = flatten([
    for zones in var.zoneversions: [
      for servername,ips in local.ipaddresses: {
        zonename = "${zones.zonename}"
        name = "${zones.zonename}${servername}"
        ipaddress = "${zones.first3octets}${ips}"
      }
    ]
  ])
 
  zonedips-map = {
    for obj in local.zonedips-list : "${obj.name}" => obj
  }
 
  ipaddresses = {
    web                = ".3"
    rdp                = ".4"
    dc                 = ".10"
    db                 = ".11"
  }
}
 
variable "zoneversions" {
  default = {
        "zonea" = {
            "zonename" = "a",
            "first3octets" = "10.9.3"
        },
        "zoneb" = {
            "zonename" = "b",
            "first3octets" = "10.9.4"
        }
    }
}
 
resource "local_file" "test" {
    for_each = local.zonedips-map
    filename    = each.value.name
    content     = each.value.ipaddress
}

You can then explore and display the contents of the variables or locals by calling them explicitly in the console:

So we can display the contents of our flattened list:
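With the variable and local values above, that looks roughly like this (trimmed to the first few entries; exact console formatting varies by Terraform version):

> local.zonedips-list
[
  {
    "ipaddress" = "10.9.3.11"
    "name" = "adb"
    "zonename" = "a"
  },
  {
    "ipaddress" = "10.9.3.10"
    "name" = "adc"
    "zonename" = "a"
  },
  {
    "ipaddress" = "10.9.3.4"
    "name" = "ardp"
    "zonename" = "a"
  },
]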

And then the produced map:
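Which, again trimmed to a few entries, looks something like:

> local.zonedips-map
{
  "adb" = {
    "ipaddress" = "10.9.3.11"
    "name" = "adb"
    "zonename" = "a"
  }
  "aweb" = {
    "ipaddress" = "10.9.3.3"
    "name" = "aweb"
    "zonename" = "a"
  }
  "bweb" = {
    "ipaddress" = "10.9.4.3"
    "name" = "bweb"
    "zonename" = "b"
  }
}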

 

Finally, we can do a “terraform plan”, and look at the file resources that would be created (I shrunk this down to just 2 items for brevity):
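It comes out looking something like this (other local_file attributes omitted):

  # local_file.test["aweb"] will be created
  + resource "local_file" "test" {
      + content  = "10.9.3.3"
      + filename = "aweb"
      + id       = (known after apply)
    }

  # local_file.test["bweb"] will be created
  + resource "local_file" "test" {
      + content  = "10.9.4.3"
      + filename = "bweb"
      + id       = (known after apply)
    }

Plan: 8 to add, 0 to change, 0 to destroy.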

You can see the key here in the ‘content’ and ‘filename’ attributes.

 

Azure routing port 25

I’ve known that Azure restricts outbound port 25 within its cloud for security reasons, but today I learned a little more about how granular that restriction is.

As a baseline, here’s a configuration I’m using in Azure right now, that is functional:

This is two virtual networks, connected with a VNET Peer, with each subnet protected by a Network Security Group allowing port 25 traffic. Clients send un-authenticated SMTP to an internal relay over port 25, and then the relay is configured to send outbound authenticated over port 587 with TLS.

 

Now here is a configuration that doesn’t work:

This is two virtual networks, connected by a site-to-site VPN tunnel established through a network virtual appliance (in this case, a VeloCloud, but it could be anything).

Then, in order to get traffic to the NVA, we use a user-defined route (UDR) on a route table, with the NVA’s LAN interface set as the next hop IP address.
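For reference, here’s roughly what that UDR looks like expressed in Terraform (a minimal sketch; the resource group, subnet, address prefix and next-hop IP below are placeholders rather than my actual values):

resource "azurerm_route_table" "to-nva" {
  name                = "rt-to-nva"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_route" "remote-site-via-nva" {
  name                   = "remote-site-via-nva"
  resource_group_name    = azurerm_resource_group.example.name
  route_table_name       = azurerm_route_table.to-nva.name
  address_prefix         = "10.20.0.0/16"  # the network on the far side of the tunnel
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "10.10.1.4"     # the NVA's LAN interface
}

resource "azurerm_subnet_route_table_association" "clients" {
  subnet_id      = azurerm_subnet.example.id
  route_table_id = azurerm_route_table.to-nva.id
}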

In this scenario, our internal relay is configured to use port 25 to a different relay on the other side of the tunnel.

From what I can tell (because there aren’t diagnostic flow logs for UDRs like there are for NSGs), the traffic gets blocked at the UDR, because it never hits my NVA. However, other types of traffic (SSH, HTTPS, port 587) have no problem being routed and received.

It’s very interesting to me that the native routing of Azure VNETs will allow port 25 traffic, but UDRs will not.

Terraform deploy Azure App Service with dotnet core stack

Terraform doesn’t yet natively have a method to set the “Stack” version of an Azure App Service to .NET Core.

This limitation is described in an issue against the AzureRm provider.

I’m not well versed in this area of Azure yet, but my understanding is that you can achieve .NET Core support by using the .NET stack, and then adding the .NET Core runtime extension.

I’m successfully running an ASP.NET Core Blazor app on .NET Core 3.1, deployed through Terraform, in this manner.

However, this means your App Service is loading the .NET Framework 4 runtime AND the .NET Core runtime as an extension, which will have a small impact on the memory footprint.

In order to get the Stack set to .NET Core without having to set it manually, we can use an ARM template deployment within Terraform. This was originally sourced from this Stack Overflow answer.

Here’s my example on GitHub, rather than embedding code inline (it’s a little long):

GitHub Example: AppService-DotNetCore.tf

This set of code deploys the app service plan and app service (as the free tier), and then an ARM template deployment which sets the Stack as .NET Core, as well as adding an extension for the .NET Core logging.
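Condensed right down, the key piece is an ARM template deployment (via the azurerm_template_deployment resource) that sets the CURRENT_STACK metadata on the site. Treat this as a sketch of the approach rather than a copy of the linked file – the resource names, API version and schema reference here are illustrative, and the logging extension is omitted:

resource "azurerm_template_deployment" "appservice-currentstack" {
  name                = "appservice-currentstack"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  parameters = {
    siteName = azurerm_app_service.example.name
  }

  template_body = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { "siteName": { "type": "string" } },
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2018-11-01",
      "name": "[concat(parameters('siteName'), '/metadata')]",
      "properties": {
        "CURRENT_STACK": "dotnetcore"
      }
    }
  ]
}
TEMPLATE
}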

Here’s my Blazor app, running on the .NET Core stack!

docker-compose environment variables and quotes

Today I am learning about using docker-compose to run a simple .NET Core Blazor Server app, and I hit a snag.

For various reasons I won’t detail right now, I want my docker container to serve my app up over HTTPS, and this requires a bit of extra configuration for .NET Core.

After producing a certificate, I managed to get my container running with a “docker run” command, like this:

docker run --rm -p 44381:443 -e ASPNETCORE_HTTPS_PORT=44381 -e ASPNETCORE_URLS="https://+;http://+" -e Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -e Kestrel__Certificates__Default__Password=password -v $env:USERPROFILE\.aspnet\https:/https/ samplewebapp-blazor

No problems, I could hit https://localhost:44381 and it all worked great.

However, that’s messy and I wanted to experiment with docker-compose yml files to clean it up a bit. I produced this:

version: "3.8"
services:
  web:
    image: samplewebapp-blazor
    ports:
      - "44381:443"
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
      - ASPNETCORE_HTTPS_PORT=44381
      - ASPNETCORE_URLS="https://+;http://+"
      - Kestrel__Certificates__Default__Password="password"
      - Kestrel__Certificates__Default__Path="/https/aspnetapp.pfx"
    volumes:
      - "/c/Users/jeff.miles/.aspnet/https:/https/"

Then I ran “docker-compose up”. However, instead of success, I saw errors!

crit: Microsoft.AspNetCore.Server.Kestrel[0]
web_1  |       Unable to start Kestrel.
web_1  | Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file

My first thought was, “That’s got to be referring to the certificate – I must not have the volume syntax correct, and it isn’t mounted”. So I messed around with a bunch of different ways of specifying the local mount point, investigated edge cases with WSL2 and Docker Desktop, and wasted about 45 minutes with no results.

So I tagged in my buddy Matthew for his insight, and his first suggestion was “is it actually mounted?” In order to check, I had to get the container to run with docker-compose, so I commented out the environment variables for ASPNETCORE_URLS and the Kestrel values. This allowed the container to run, although I couldn’t actually hit the web app.

Then I was able to do: “docker exec -it containername bash”

Using this I could browse the filesystem, and verify the volume was mounted and the certificate was present.

Within that bash prompt, I manually set the environment variables, and then re-ran dotnet with the same entrypoint command as what builds my docker image. Surprisingly, the application loaded up successfully!
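Roughly, that looked like the following from the container’s bash prompt (the DLL name is a placeholder for whatever your image’s entrypoint actually runs):

export ASPNETCORE_HTTPS_PORT=44381
export ASPNETCORE_URLS="https://+;http://+"
export Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
export Kestrel__Certificates__Default__Password=password
dotnet SampleWebApp.dll   # placeholder for the image's real entrypoint command

Note that in bash, the quotes around the ASPNETCORE_URLS value are shell syntax and get stripped before the variable is set – which, as it turned out, is exactly the difference from the compose file.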

This tells me the volume is good, but something’s wrong with the passed-in variables.

First, I tried taking the quotes off the value of the Kestrel__Certificates__Default__Path variable. But then docker-compose gave me this error:

web_1  | crit: Microsoft.AspNetCore.Server.Kestrel[0]
web_1  |       Unable to start Kestrel.
web_1  | System.InvalidOperationException: Unrecognized scheme in server address '"https://+""'. Only 'http://' is supported.

I decided to remove all quotes from all environment variables (as a shot in the dark), and again surprisingly, it worked!

A bit of internet sleuthing later, and Matthew had produced this GitHub issue as an explanation of what was going on.

Because I was wrapping the environment variables in quotes, they were actually getting injected into the container with quotes!
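You can confirm it by inspecting the container’s environment; with the original compose file above, something like this shows the literal quotation marks that ended up in the value:

docker exec -it containername printenv ASPNETCORE_URLS
"https://+;http://+"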

Here’s the end result of my compose file:

version: "3.8"
services:
  web:
    image: samplewebapp-blazor
    ports:
      - "44381:443"
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
      - ASPNETCORE_HTTPS_PORT=44381
      - ASPNETCORE_URLS=https://+;http://+
      - Kestrel__Certificates__Default__Password=password
      - Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - "/c/Users/jeff.miles/.aspnet/https:/https/"

It looks like, as of docker-compose 1.26 (out now), if you need quotes around environment variable values, you should use a .env file, which handles them properly.

Azure Automation and DSC inside Pipeline

For a while I have been integrating Terraform resource deployment of Azure VMs with Azure Desired State Configuration inside of them (previous blog post).

Over time, my method for deploying the Azure resources to support the DSC configurations has matured into a PowerShell script that checks and creates prerequisites; however, there was still a bunch of manual effort to go through, including the creation of the Automation Run-As account.

This was one of the first things I started building in an Azure DevOps pipeline; having now spent a bunch of time getting it working, I can say it was a good idea, and I learned a bunch too.

Some assumptions are made here. You have:

  • an Azure DevOps organization to play around with
  • an Azure subscription
  • the capability/authorization to create new service principals in the Azure Active Directory associated with your subscription.

You can find the GitHub repository containing the code here, with the README file containing the information required to use the code and replicate it.

A few of the key considerations that I wanted to include (and where they are solved) were:

  • How do I automatically create a Run-as account for Azure Automation, when it is so simple in the portal (1 click!)
    • Check the New-RunAsAccount.ps1 file, which gets called by the Create-AutomationAccount.ps1 script in a pipeline
  • Azure Automation default modules are quite out of date, and cause problems with using new DSC resources and syntax. Need to update them.
  • I make use of Az PowerShell in runbooks, so I need to add those modules, but Az.Accounts is a pre-req for the others, so it must be handled differently
    • Create-AutomationAccount.ps1 has a section to do this for Az.Accounts, and then a separate function that is called to import any other modules from the gallery that are needed (defined in the parameters file)
  • Want to use DSC composites, but need a mechanism of uploading that DSC module as an Automation Account module automatically
    • A separate pipeline found on “ModuleDeploy-pipeline.yml” is used with tasks to achieve this
  • Don’t repeat parameters between scripts or files – one location where I define them, and re-use them
    • See “dsc_parameters.ps1”, which gets dot-sourced in the scripts which are directly called from pipeline tasks

Most important are the requirements to get started:

  • An Azure Subscription in which to deploy resources
  • An Azure KeyVault that will be used to generate certificates
  • An Azure Storage Account with a container, to store composite module zip
  • An Azure DevOps organization you can create pipelines in
  • An Azure Service Principal with the following RBAC and API permissions (so that it can itself create new service principals):
    • must be “Application Administrator” on the Azure AD tenant
    • must be “Owner” on the subscription
    • must have appropriate rights to an access policy on the KeyVault to generate and retrieve Certificates
    • must be granted the following API permissions within the Azure Active Directory:
  • An Azure DevOps Service Connection linked to the Service Principal above

 

The result is that we have two different pipelines which can do the following:

  • ModuleDeploy-pipeline.yml pipeline runs and:
    • takes module from repository and creates a zip file
    • uploads DSC composite module zip to blob storage
    • creates automation account if it doesn’t exist
    • imports DSC composite module to automation account from blob storage (with SAS)
  • azure-pipelines.yml pipeline runs and:
    • creates automation account if it doesn’t exist
    • imports/updates Az.Accounts module
    • imports/updates remaining modules identified in parameters
    • creates new automation runas account (and required service principal) if it doesn’t exist (generating an Azure KeyVault certificate to do so)
    • performs a ‘first-time’ run of the “Update-AutomationAzureModulesForAccount” runbook (because automation account is created with out-of-date default modules)
    • imports DSC configuration
    • compiles DSC configuration against configuration data
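
To give a sense of the shape of these pipelines, here’s a minimal sketch of the kind of task they use to call the scripts above (the trigger, agent pool and service connection name are placeholders – the real pipeline definitions are in the repository):

trigger:
  - master

pool:
  vmImage: 'windows-latest'

steps:
  - task: AzurePowerShell@5
    displayName: 'Create Automation account and import modules'
    inputs:
      azureSubscription: 'my-service-connection'   # placeholder - the service connection linked to the Service Principal
      ScriptType: 'FilePath'
      ScriptPath: 'Create-AutomationAccount.ps1'
      azurePowerShellVersion: 'LatestVersion'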