Terraform and Azure DNS apex A record

I have a use case for an Azure DNS Private Zone with an apex A record. For example, I have the name “test.domain.com”, and for the VNET that I link to my private zone, I want it to resolve ONLY “test” within domain.com, but go out to the public DNS hierarchy for any other records within “domain.com”.

This can be created directly in the Azure portal by leaving the “Name” field empty when creating a record set, which produces an apex record displayed with the name “@”.

I want to deploy this through Terraform, so I first tried setting an empty string for the Name property (because Name is a required field on the AzureRM provider resource):

resource "azurerm_private_dns_a_record" "test-domain-com-apex" {
    name                = ""
    zone_name           = azurerm_private_dns_zone.test-domain-com.name
    resource_group_name = azurerm_resource_group.shared-rg.name
    ttl                 = 300
    records             = ["10.9.3.230"]
}

However, the AzureRM provider doesn’t like that, and rejects the empty name.

So then I went to the Portal, and did an “Export Template” to view the ARM resource natively. Here I found a syntax that appeared to be “zone-name/@”.

I tried this in Terraform:

resource "azurerm_private_dns_a_record" "test-domain-com-apex" {
    name                = "${azurerm_private_dns_zone.test-domain-com.name}/@"
    zone_name           = azurerm_private_dns_zone.test-domain-com.name
    resource_group_name = azurerm_resource_group.shared-rg.name
    ttl                 = 300
    records             = ["10.9.3.230"]
}

However, this wasn’t valid, and produced strange output.

Next I tried just the @ symbol:

resource "azurerm_private_dns_a_record" "test-domain-com-apex" {
    name                = "@"
    zone_name           = azurerm_private_dns_zone.test-domain-com.name
    resource_group_name = azurerm_resource_group.shared-rg.name
    ttl                 = 300
    records             = ["10.9.3.230"]
}

This worked!

Now I can selectively resolve specific FQDNs within my VNET without having to worry about records outside that scope.

Terraform nested for_each example

Today I needed a double for_each in my Terraform configuration: the ability to for_each over one thing and, within each of those items, for_each over another thing.

Here’s the context:

I want to produce two Azure Private DNS Zones, with records inside each of them, but conditionally. Think of it as ‘zones’ – zone A and zone B will be unique in their identifiers, but have commonalities in the IP addresses used.

I want to do this conditionally (a zone may not always exist), but also without repeating myself in code.

Let’s start with a variable map of my zones:

variable "zoneversions" {
  default = {
        "zonea" = {
            "zonename" = "a",
            "first3octets" = "10.9.3"
        },
        "zoneb" = {
            "zonename" = "b",
            "first3octets" = "10.9.4"
        }
    }
}

Here I’m creating an object that will work with for_each syntax. You’ll note I’m including additional attributes that are unique to each zone – this will come in handy later.

This variable allows me to create my Azure DNS private zones like this:

resource "azurerm_private_dns_zone" "zones-privatedns" {
  for_each            = var.zoneversions
  name                = "${each.value.zonename}.domain.com"
  resource_group_name = azurerm_resource_group.srv-rg.name
  }
}

This is using the “each.value” syntax, referencing the attributes of each zone. Note that the for_each expression re-keys the map by zonename, so each zone instance can be looked up by that name later, when records need to be placed into the correct zone. This Terraform will produce two Private DNS zones: a.domain.com and b.domain.com.

Now I want to populate each zone with records.
First, I’m going to use a local value (it could be a regular variable too) that creates a map of keys (the common parts of server names) and values (the last octet of the IP addresses):

locals {
  ipaddresses = {
    web                = ".3"
    rdp                = ".4"
    dc                 = ".10"
    db                 = ".11"
  }
}

For each zone that I have (a or b), I want to create a DNS record for each key in this map (hence the double for_each). Terraform won’t let you combine for_each and count on one resource, and it doesn’t natively support two for_each expressions.

After a lot of trial and error (using terraform console to test) I came up with the code below. This article with a post by ‘apparentlysmart’ was a big help in the final task and helped me understand the structure of what I was trying to build.

I need two new local values. The first produces a flattened list of the combinations I’m looking for. And since for_each only accepts a map (or a set of strings), the second converts that list into a map.

locals {
  zonedips-list = flatten([ # Produce a list of objects, containing a name and IP address for each zone we specify in our variable
    for zones in var.zoneversions : [
      for servername, ips in local.ipaddresses : {
        zonename  = zones.zonename
        name      = "${zones.zonename}${servername}"
        ipaddress = "${zones.first3octets}${ips}"
      }
    ]
  ])

  zonedips-map = { # Take the list and turn it into a map, so we can use it in a for_each
    for obj in local.zonedips-list : obj.name => obj # the key of each entry is obj.name (e.g. "aweb"); "=> obj" keeps the attributes of the object the same as the original
  }
}

Then I can use that second local when defining a single “azurerm_private_dns_a_record” resource:

resource "azurerm_private_dns_a_record" "vm-privaterecords" {
  for_each            = local.zonedips-map
  name                = each.value.name
  zone_name           = azurerm_private_dns_zone.zones-privatedns[each.value.zonename].name
  resource_group_name = azurerm_resource_group.srv-rg.name
  ttl                 = 300
  records             = [each.value.ipaddress]
}

This is where the magic happens. Because each object in my map “zonedips-map” carries its attributes, I can reference them with the “each.value” syntax. So the name field of my DNS record will be equivalent to “${zones.zonename}${servername}”, or “aweb”/“bweb” as the for_each iterates. To place these in the correct zone, I’m using index selection on the resource within the “zone_name” attribute: this says refer to the private DNS zone with the Terraform identifier “zones-privatedns”, at the index (since there are multiple instances) that matches my zone name.

This is where terraform console comes in really handy; I can produce a simple Terraform config (without an AzureRM provider) that contains these items, with either outputs or a placeholder resource (like a file).

For example, take the Terraform configuration below, run “terraform init” on it, and then run the “terraform console” command.

terraform {
  backend "local" {
  }
}
 
locals {
  zonedips-list = flatten([
    for zones in var.zoneversions : [
      for servername, ips in local.ipaddresses : {
        zonename  = zones.zonename
        name      = "${zones.zonename}${servername}"
        ipaddress = "${zones.first3octets}${ips}"
      }
    ]
  ])

  zonedips-map = {
    for obj in local.zonedips-list : obj.name => obj
  }
 
  ipaddresses = {
    web                = ".3"
    rdp                = ".4"
    dc                 = ".10"
    db                 = ".11"
  }
}
 
variable "zoneversions" {
  default = {
        "zonea" = {
            "zonename" = "a",
            "first3octets" = "10.9.3"
        },
        "zoneb" = {
            "zonename" = "b",
            "first3octets" = "10.9.4"
        }
    }
}
 
resource "local_file" "test" {
    for_each = local.zonedips-map
    filename    = each.value.name
    content     = each.value.ipaddress
}

You can then explore and display the contents of the variables or locals by calling them explicitly in the console:

So we can display the contents of our flattened list (abridged below, and the exact console formatting varies by Terraform version):
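> local.zonedips-list
[
  {
    "ipaddress" = "10.9.3.11"
    "name" = "adb"
    "zonename" = "a"
  },
  {
    "ipaddress" = "10.9.3.10"
    "name" = "adc"
    "zonename" = "a"
  },
  ...
  {
    "ipaddress" = "10.9.4.3"
    "name" = "bweb"
    "zonename" = "b"
  },
]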

And then the produced map, which re-keys those same objects by their name attribute (again abridged):
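> local.zonedips-map
{
  "adb" = {
    "ipaddress" = "10.9.3.11"
    "name" = "adb"
    "zonename" = "a"
  }
  ...
  "bweb" = {
    "ipaddress" = "10.9.4.3"
    "name" = "bweb"
    "zonename" = "b"
  }
}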


Finally, we can do a “terraform plan”, and look at the file resources that would be created (I shrunk this down to just 2 items for brevity):
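For instance, the aweb and bweb entries come out along these lines (plan output trimmed):

  # local_file.test["aweb"] will be created
  + resource "local_file" "test" {
      + content  = "10.9.3.3"
      + filename = "aweb"
      + id       = (known after apply)
    }

  # local_file.test["bweb"] will be created
  + resource "local_file" "test" {
      + content  = "10.9.4.3"
      + filename = "bweb"
      + id       = (known after apply)
    }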

You can see the key here in the ‘content’ and ‘filename’ attributes.


Terraform deploy Azure App Service with dotnet core stack

Terraform doesn’t yet natively have a method to set the “Stack” version of an Azure App Service to .NET Core.

This limitation is described in an issue against the AzureRM provider.

I’m not well versed in this area of Azure yet, but my understanding is that you can achieve .NET Core support by using the .NET stack and then adding the .NET Core runtime extension:

I’m successfully running an ASP.NET Blazor app on .NET Core 3.1, deployed through Terraform, in this manner.

However, this means your App Service is loading the .NET Framework 4 runtime AND the .NET Core runtime as an extension, which will have a small impact on the memory footprint.

In order to get the Stack set to .NET Core without having to set it manually, we can use an ARM template deployment within Terraform. This was originally sourced from this StackOverflow answer.

Here’s my example on GitHub, rather than embedding code inline (it’s a little long):

GitHub Example: AppService-DotNetCore.tf

This set of code deploys the App Service plan and App Service (on the free tier), and then an ARM template deployment which sets the Stack as .NET Core, as well as adding an extension for .NET Core logging.
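As a condensed sketch of the approach (the resource names here are illustrative, and the apiVersion and CURRENT_STACK metadata property follow that StackOverflow answer; see the GitHub file for the complete, working version):

# Hypothetical, condensed version - see the GitHub example for the full code
resource "azurerm_template_deployment" "dotnetcore-stack" {
  name                = "dotnetcore-stack"
  resource_group_name = azurerm_resource_group.example.name
  deployment_mode     = "Incremental"

  parameters = {
    siteName = azurerm_app_service.example.name
  }

  template_body = <<TEMPLATE
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "siteName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "apiVersion": "2018-11-01",
      "name": "[concat(parameters('siteName'), '/metadata')]",
      "properties": {
        "CURRENT_STACK": "dotnetcore"
      }
    }
  ]
}
TEMPLATE
}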

Here’s my Blazor app, running on .NET Core stack!

Terraform AzureRM provider 2.0 upgrade

Today I needed to upgrade a set of Terraform configuration to the AzureRM 2.0 provider (technically 2.9.0 as of this writing). My need is primarily to get some bug fixes regarding Application Gateway and SSL certificates, but I knew I’d need to move sooner or later as any new resources and properties are being developed on this new major version.

This post will outline my experience and some issues I faced; they’ll be very specific to my set of configuration but the process may be helpful for others.

To start, I made sure I had the following:

  • A solid source-control version that I could roll back to
  • A snapshot of my state file (sitting in an Azure storage account)

Then, within my ‘terraform’ block where I specify the backend and required versions, I updated the value for my provider version:

terraform {
  backend "azurerm" {
  }
  required_version = "~> 0.12.16"
  required_providers {
    azurerm = "~> 2.9.0"
  }
}

Next, I ran the ‘terraform init’ command with the upgrade switch; plus some other parameters at command line because my backend is in Azure Storage:

terraform init `
    -backend-config="storage_account_name=$storage_account" `
    -backend-config="container_name=$containerName" `
    -backend-config="access_key=$accountKey" `
    -backend-config="key=prod.terraform.tfstate" `
    -upgrade

Since my required_providers property specified AzureRM at ~> 2.9.0, the upgrade took place.


Now I could run a “terraform validate” and expect to get some syntax errors. I could have combed through the full upgrade guide and preemptively modified my code, but I found relying upon validate to call out file names and line numbers easier.

Below are some of the things I found I needed to change – I had to run “terraform validate” multiple times to catch all the items:

  • Add a “features” block to the provider:

provider "azurerm" {
  subscription_id = var.subscription
  client_id       = "service principal id"
  client_secret   = "service principal secret"
  tenant_id       = "tenant id"
  features {}
}

  • Remove “network_security_group_id” references from subnets
  • Modify “address_prefix” to “address_prefixes” on subnets, with the value as a list (see the snippet below)
  • Virtual Machine Extension drops the “resource_group”, “location”, and “virtual_machine_name” properties
  • Virtual Machine Extension requires a “virtual_machine_id” property
  • Storage Account no longer has the “enable_advanced_threat_protection” property
  • Storage Container no longer has the “resource_group_name” property
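
Here’s what the subnet change looks like, on a hypothetical subnet (names are illustrative):

# AzureRM 1.x
resource "azurerm_subnet" "example" {
  name                 = "subnet1"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefix       = "10.0.1.0/24"
}

# AzureRM 2.x - address_prefixes takes a list
resource "azurerm_subnet" "example" {
  name                 = "subnet1"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.1.0/24"]
}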

Finally, the resources for Azure Backup VM policy and protection have been renamed – this is outlined in the upgrade guide (direct link).

It was this last one that caused the most problems. Before I had replaced it, my “terraform validate” was crashing with a fatal panic:

Looking in the crash.log, I eventually found an error on line 2572:

2020/05/13 20:30:55 [ERROR] AttachSchemaTransformer: No resource schema available for azurerm_recovery_services_protected_vm.rsv-protect-rapid7console

This reminded me of the resource change in the upgrade guide and I modified it.

Now, “terraform validate” is successful, yay!


Not so fast though – my next step, “terraform plan”, failed:

Error: no schema available for azurerm_recovery_services_protected_vm.rsv-protect-rapid7console while reading state; this is a bug in Terraform and should be reported

I knew this was because there was still a reference in my state file, so my first thought was to try a “terraform state mv” command, to update the reference:

terraform state mv old_resource_type.resource_name new_resource_type.resource_name
terraform state mv azurerm_recovery_services_protection_policy_vm.ccsharedeus-mgmt-backuppolicy azurerm_backup_policy_vm.ccsharedeus-mgmt-backuppolicy

Of course, it was too much to hope that would work; it errored out:

Cannot move
azurerm_recovery_services_protection_policy_vm.ccsharedeus-mgmt-backuppolicy
to azurerm_backup_policy_vm.ccsharedeus-mgmt-backuppolicy: resource types
don't match.

I couldn’t find anything else online about converting a pre-existing terraform state to the 2.0 provider with resources changing like this. And from past experience I knew that Azure Backup didn’t like deleting and re-creating VM protection and policies, so I didn’t want to try a “terraform taint” on the resource.

I decided to take a risk and modify my state file directly (after confirming my snapshot!).

Connecting to my blob storage container, I downloaded a copy of the state file, and replaced all references of the old resource type with the new resource type.
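For reference, these were the resource types in question; per the upgrade guide, the old names on the left map to the new names on the right:

azurerm_recovery_services_protection_policy_vm  ->  azurerm_backup_policy_vm
azurerm_recovery_services_protected_vm          ->  azurerm_backup_protected_vm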

After replacing the text, I uploaded my modified file back to the blob container, and re-ran “terraform plan”.

This worked! The plan ran successfully, and showed no further changes required in my infrastructure.

Terraform dynamic blocks in resources (Azure Backup Retention Policy)

This post will give an example of using Terraform dynamic blocks within an Azure resource. Typically this is done when you need multiple instances of a nested block within a resource, for example multiple “http_listener” within an Azure Application Gateway.

Here I’m using an Azure Backup retention policy which, as a Terraform resource (note I’m using the resource names from before the AzureRM 2.0 provider, which are no longer supported), can have blocks for multiple different retention levels. In this case we are only looking to have one of each type of nested block (retention_daily, retention_weekly, etc.), however there are cases where we want zero instances of a block.
I’m sure there would be a way to simplify this code with a list(map(string)) variable, so that the individual nested blocks I’ve identified here aren’t necessary; however, I haven’t yet spent the time to make that simplification.

A single-file example of this code can be found on my GitHub repo here.


First, create a Map variable containing the desired policy values:

variable "default_rsv_retention_policy" {
  type = map(string)
  default = {
      retention_daily_count = 14
      retention_weekly_count = 0
      #retention_weekly_weekdays = ["Sunday"]
      retention_monthly_count = 0
      #retention_monthly_weekdays = ["Sunday"]
      #retention_monthly_weeks = [] #["First", "Last"]
      retention_yearly_count = 0
      #retention_yearly_weekdays = ["Sunday"]
      #retention_yearly_weeks = [] #["First", "Last"]
      #retention_yearly_months = [] #["January"]
    }
}

In the example above, this policy will retain 14 daily backups, and nothing else.

If you wanted a more typical grandfather-father-son scenario, you could retain weekly backups for 52 weeks taken every Sunday, and monthly backups for 36 months taken from the first Sunday of each month:

variable "default_rsv_retention_policy" {
  type = map(string)
  default = {
      retention_daily_count = 14
      retention_weekly_count = 52
      retention_weekly_weekdays = ["Sunday"]
      retention_monthly_count = 36
      retention_monthly_weekdays = ["Sunday"]
      retention_monthly_weeks = ['First'] #["First", "Last"]
      retention_yearly_count = 0
      #retention_yearly_weekdays = ["Sunday"]
      #retention_yearly_weeks = [] #["First", "Last"]
      #retention_yearly_months = [] #["January"]
    }
}

Now we’re going to use this map variable within dynamic blocks. Check the GitHub link above for the full file, as I’m only going to insert the dynamic blocks here for explanation. Each of these blocks would go inside the “azurerm_backup_policy_vm” resource.


First up is the daily retention – in my case, I am making the assumption that there will ALWAYS be a daily value, and thus I use the variable directly.

  # Assume we will always have daily retention
  retention_daily {
    count = var.default_rsv_retention_policy["retention_daily_count"]
  }

Next we will add in the “retention_weekly” block:

dynamic "retention_weekly" {
    for_each = var.default_rsv_retention_policy["retention_weekly_count"] > 0 ? [1] : []
    content {
      count  = var.default_rsv_retention_policy["retention_weekly_count"]
      weekdays = var.default_rsv_retention_policy["retention_weekly_weekdays"]
    }
  }

The “for_each” is using conditional logic, in this format: condition ? true_val : false_val
It is evaluated as: if the “retention_weekly_count” value of our map variable is greater than zero, then provide one content block, else provide zero content blocks.

From the first example of the variable I provided, since the weekly count is 0, this content block would not appear in our Terraform resource.

In the second example, we provided a value greater than zero (52), and we also specified a weekday. This is what gets inserted into the content block as the property names and values.
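You can check the conditional quickly in terraform console (output formatting may vary slightly by Terraform version):

> 52 > 0 ? [1] : []
[
  1,
]
> 0 > 0 ? [1] : []
[]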


Similarly, the monthly retention would look like this:

dynamic "retention_monthly" {
    for_each = var.default_rsv_retention_policy["retention_monthly_count"] > 0 ? [1] : []
    content {
      count  = var.default_rsv_retention_policy["retention_monthly_count"]
      weekdays = var.default_rsv_retention_policy["retention_monthly_weekdays"]
      weeks    = var.default_rsv_retention_policy["retention_monthly_weeks"]
    }
  }

Because of the nature of this Azure resource, there is a new content property named “weeks” which signifies the week of the month to take the backup on (we said “First”).
Using dynamic blocks is an effective way to parameterize your Terraform. Even in the Azure Application Gateway example I mentioned earlier, that resource has many other nested blocks this technique would be useful for, like “disabled_rule_group” within the WAF configuration, “backend_http_settings”, or “probe”. I would expect the same is true for other Azure load balancers or Front Door configurations.
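As a final sketch, the same pattern on an Application Gateway could look like this, assuming a hypothetical var.http_settings map of objects (e.g. { "app1" = { port = 443, protocol = "Https" } }):

dynamic "backend_http_settings" {
  for_each = var.http_settings # hypothetical map variable; one block is rendered per entry
  content {
    name                  = backend_http_settings.key
    cookie_based_affinity = "Disabled"
    port                  = backend_http_settings.value.port
    protocol              = backend_http_settings.value.protocol
    request_timeout       = 30
  }
}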