IIS applications and virtual directories with PowerShell

I’m currently building a container on a Windows Server Core base image with IIS. The intention will be to run this within Azure Kubernetes Service (AKS), on Windows node pools.

A very useful resource in understanding the IIS concepts discussed in this post comes from Octopus: https://octopus.com/blog/iis-powershell#recap-iis-theory

One of the challenges I’m working with is the desire to meet both these requirements:

  • Able to always place our application in a consistent and standard path (like c:\app)
  • Need to be able to serve the app behind customizable virtual paths
    • For example, /env/app/webservice or /env/endpoint
    • These virtual paths should be specified at runtime, not in the container build (to reduce the number of unique containers)
    • A unique domain cannot be required for each application

One of the thoughts is that while testing the application locally, I want to be able to reach the application at the root path (i.e. http://localhost:8080/) but when put together in the context of a distributed system, I want to serve this application behind a customizable path.

In AKS, using the ingress-nginx controller, I can use the “rewrite-target” annotation in order to have my ingress represent the virtual path while maintaining the application at the root of IIS in the container. However, this quickly falls down when various applications are used that might have non-relative links for stylesheets and javascript includes.

One idea was to place the application in the root (c:\inetpub\wwwroot) and then add a new Application on my virtual path pointing to the same physical path. However, this caused problems with a duplicate web.config being recognized, because it was picked up from the physical path by both the root Application and my virtual path Application. This could be mitigated in the web.config with the use of <location inheritInChildApplications="false"> tags, but I also realized I don’t need BOTH requirements to be available at the same time. If a variable virtual path is passed into my container, I don’t need the application served at the root.

With this in mind, I set about creating logic like this:

  1. In the Dockerfile, place the application at c:\app
  2. If the environment variable “VirtualPath” exists
    1. Create an IIS Application pointing at the supplied Virtual Path, with a physical path of c:\app
  3. else
    1. Change the physical path of “Default Web Site” to c:\app

I tested this in the GUI on a Windows Server 2019 test virtual machine, and it appeared to work for my application just fine. However, when I tested using PowerShell (intending to move functional code into my docker run.ps1 script), unexpected errors occurred.

Here’s what I was attempting:

New-WebVirtualDirectory -Name "envtest/app1/webservice" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"

And here is the error it produced for me:

The view at ‘~/Views/Home/Index.cshtml’ must derive from WebViewPage, or WebViewPage<TModel>

Interestingly, displaying straight HTML within this virtual path for the Application works just fine – it is only the MVC app that has an error.

The application I’m testing with is a dotnet MVC application, but none of the common solutions to this problem are relevant – the application works just fine at the root of a website, just not when applied under a virtual path.

Using the context from the Octopus link above, I began digging a little deeper and testing, primarily targeting the ApplicationHost.config file located at “C:\Windows\System32\inetsrv\Config”.

When I created the path structure manually in the GUI (creating each virtual subdirectory), which was the approach that worked, the Site section of this config file looked like this:

<site name="Default Web Site" id="1">
    <application path="/">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
		<virtualDirectory path="/envtest" physicalPath="%SystemDrive%\inetpub\wwwroot" />
		<virtualDirectory path="/envtest/app1" physicalPath="%SystemDrive%\inetpub\wwwroot" />
    </application>
    <application path="envtest/app1/webservice" applicationPool="DefaultAppPool">
        <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:80:" />
    </bindings>
    <logFile logTargetW3C="ETW" />
</site>

However, when I used the PowerShell example above, this is what was generated:

<site name="Default Web Site" id="1">
    <application path="/">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
    </application>
    <application path="envtest/app1/webservice" applicationPool="DefaultAppPool">
        <virtualDirectory path="/" physicalPath="C:\inetpub\wwwroot" />
    </application>
    <bindings>
        <binding protocol="http" bindingInformation="*:80:" />
    </bindings>
    <logFile logTargetW3C="ETW" />
</site>

It seems clear that while IIS can serve content under the virtual path I created, MVC doesn’t like the missing virtual directories.


When I expanded my PowerShell implementation to look like this, the application began to work without error:

New-WebVirtualDirectory -Name "/envtest" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"
New-WebVirtualDirectory -Name "/envtest/app1" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"
New-WebApplication -Name "/envtest/app1/webservice" -PhysicalPath "C:\app\" -Site "Default Web Site" -ApplicationPool "DefaultAppPool"

I could then confirm that my ApplicationHost.config file matched what was created in the GUI.
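
To double-check without opening the config file directly, the WebAdministration module can also list what IIS now has configured (a quick sanity check from an elevated session):

Import-Module WebAdministration
Get-WebVirtualDirectory -Site "Default Web Site"
Get-WebApplication -Site "Default Web Site"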


The last piece for me was turning a VirtualPath environment variable, which could contain any depth of path, into the correct representation of IIS virtual directories and applications.

Here’s an example of how I’m doing that:

if (Test-Path "ENV:VirtualPath")
{
    # Trim the start in case a prefix forward slash was supplied
    $ENV:VirtualPath = $ENV:VirtualPath.TrimStart("/")
    Write-Host "Virtual Path is passed, will configure IIS web application"
    # We have to ensure the Application/VirtualDirectory in IIS gets created properly when the path contains multiple elements
    # Otherwise IIS won't serve some applications properly, like ASP.NET MVC sites

    Import-Module WebAdministration
    $segments = $ENV:VirtualPath.Split("/")
    # Create a virtual directory for each element of the Virtual Path, excluding the last leaf
    # (a single-element path skips this loop entirely)
    for ($i = 0; $i -lt $segments.Count - 1; $i++) {
        # Concatenate all segments up to and including the current index
        $usepath = [string]::Join("/", $segments[0..$i])
        New-WebVirtualDirectory -Name "$usepath" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot" # Don't specify Application, default to root
    }

    # Create the Application with the full Virtual Path (making the last element effective)
    New-WebApplication -Name "$ENV:VirtualPath" -PhysicalPath "C:\app\" -Site "Default Web Site" -ApplicationPool "DefaultAppPool" # Expect no leading forward slash
} else {
    # Since no virtual path was passed, we want Default Web Site to point to C:\app
    Set-ItemProperty -Path "IIS:\Sites\Default Web Site" -Name "physicalPath" -Value "C:\app\"
}
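
For context, the virtual path is then supplied when the container is started, along these lines (image name and port mapping are hypothetical):

docker run -d -p 8080:80 -e VirtualPath="envtest/app1/webservice" my-iis-app:latest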


AKS StorageClass for Standard HDD managed disk

Today while exploring the Azure Kubernetes Service docs, specifically looking at Storage, I came across a note about StorageClasses:

You can create a StorageClass for additional needs using kubectl

This combined with the description of the default StorageClasses for Managed Disks being Premium and Standard SSD led me to question “what if I want a Standard HDD for my pod?”

This is absolutely possible!

First I took a look at the parameters of an existing StorageClass, the ‘managed-csi’ class.
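
Those parameters can be dumped with kubectl (output omitted here, since it varies by cluster and AKS version):

kubectl get storageclass managed-csi -o yaml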

While the example provided in the link above uses the old ‘in-tree’ method of StorageClasses, this gave me the proper Provisioner value to use the Container Storage Interface (CSI) method.

I created a yaml file with these contents:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-hdd
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: True
volumeBindingMode: WaitForFirstConsumer
parameters:
  skuname: StandardHDD_LRS

In reality, I took a guess at the “skuname” parameter here, replacing “StandardSSD_LRS” with “StandardHDD_LRS”. Having used Terraform before with Managed Disk SKUs, I figured this wasn’t going to be valid, but I wanted to see what happened.

Then I performed a ‘kubectl apply -f filename.yaml’ to create my StorageClass. This worked without any errors.

To test, I created a PersistentVolumeClaim, and then a simple Pod, with this yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-hdd-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi-hdd
  resources:
    requests:
      storage: 5Gi
---
kind: Pod
apiVersion: v1
metadata:
  name: teststorage-pod
spec:
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - name: teststorage
      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      volumeMounts:
      - mountPath: "/mnt/azurehdd"
        name: hddvolume
  volumes:
    - name: hddvolume
      persistentVolumeClaim:
        claimName: test-hdd-disk

After applying this with kubectl, my PersistentVolumeClaim was in a Pending state, and the Pod wouldn’t create. I looked at the Events of my PersistentVolumeClaim and, as expected, found a provisioning error.
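
The events themselves can be pulled with kubectl (I haven’t reproduced the full provisioning error text here):

kubectl describe pvc test-hdd-disk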

The error told me my ‘skuname’ value isn’t valid, and that I should instead be using a supported type like “Standard_LRS”.

Using kubectl I deleted my Pod, PersistentVolumeClaim, and StorageClass, modified my yaml, and re-applied.
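
That cleanup and re-apply is a direct translation of the steps above (file names are placeholders for however you’ve split the yaml):

kubectl delete pod teststorage-pod
kubectl delete pvc test-hdd-disk
kubectl delete storageclass managed-csi-hdd
kubectl apply -f storageclass-hdd.yaml
kubectl apply -f pvc-and-pod.yaml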

This time, the claim was created successfully, and a persistent volume was dynamically generated. I could see that disk created as the correct type in the Azure Portal listing of disks.
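
The same thing can be confirmed from the command line (the node resource group name below is a placeholder):

kubectl get pvc test-hdd-disk
az disk list --resource-group MC_myresourcegroup_mycluster_myregion --query "[].{Name:name, Sku:sku.name}" -o table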

The Supported Values in that error message also tell me I can create ZRS-enabled StorageClasses, but only for Premium and StandardSSD managed disks.

Here’s the proper functioning yaml for the StorageClass, with the skuname fixed:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-hdd
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: True
volumeBindingMode: WaitForFirstConsumer
parameters:
  skuname: Standard_LRS


AzCopy with Packer out of memory

One of my Packer builds for a Windows image uses AzCopy to download files from Azure blob storage. In some circumstances I’ve had issues where the AzCopy “copy” command fails with a Go runtime error, like this:

2022/01/06 10:00:02 ui:     hyperv-vmcx: Job e1fcf7c7-f32e-d247-79aa-376ef5d49bd6 has started
2022/01/06 10:00:02 ui:     hyperv-vmcx: Log file is located at: C:\Users\cxadmin\.azcopy\e1fcf7c7-f32e-d247-79aa-376ef5d49bd6.log
2022/01/06 10:00:02 ui:     hyperv-vmcx:
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime: VirtualAlloc of 8388608 bytes failed with errno=1455
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: fatal error: out of memory
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx:
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime stack:
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.throw(0xbeac4b, 0xd)
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/panic.go:1117 +0x79
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.sysUsed(0xc023d94000, 0x800000)
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/mem_windows.go:83 +0x22e
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.(*mheap).allocSpan(0x136f960, 0x400, 0xc000040100, 0xc000eb9b00)
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/mheap.go:1271 +0x3b1
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.(*mheap).alloc.func1()
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/mheap.go:910 +0x5f
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.systemstack(0x0)
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/asm_amd64.s:379 +0x6b
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: runtime.mstart()
2022/01/06 10:00:06 ui error: ==> hyperv-vmcx: 	/opt/hostedtoolcache/go/1.16.0/x64/src/runtime/proc.go:1246

Notice the “fatal error: out of memory” there.

I had already set the AzCopy environment variable AZCOPY_BUFFER_GB to 1 GB, and I had also increased my pagefile size (knowing Windows doesn’t always grow it on demand reliably), but neither change helped.

Then I stumbled upon this GitHub issue from tomconte: https://github.com/Azure/azure-storage-azcopy/issues/781

I added this into my Packer build before AzCopy gets called, and it seems to have resolved my problem.
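
I won’t reproduce the exact provisioner change here, but as an illustrative sketch, one documented AzCopy knob is AZCOPY_CONCURRENCY_VALUE, which caps how many concurrent connections (and therefore buffers) AzCopy uses; setting it before the copy looks something like this in PowerShell (values and URLs are placeholders, tune for your build host):

# Cap concurrency so AzCopy allocates fewer buffers (value is an example only)
$env:AZCOPY_CONCURRENCY_VALUE = "8"
# Buffer cap that was already in place for this build
$env:AZCOPY_BUFFER_GB = "1"
azcopy copy "https://<account>.blob.core.windows.net/<container>?<sas>" "C:\temp\files" --recursive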

AKS Windows Node problem after 1.22 upgrade

Here’s a bit of a troubleshooting log as I worked through an experimental cluster in Azure Kubernetes Service (AKS).

As a starting point, my cluster was on K8s version 1.21.4, with one node pool of “system” type on Linux, and one nodepool of “user” type on Windows.

I performed an upgrade to 1.22.4, upgrading both the cluster and the nodepools.

Following this I had 2 issues appear in the Azure Portal for my node pools:

  1. The Linux node pool only rebuilt one of the Virtual Machine Scale Set (VMSS) instances to run 1.22.4 – the other instance was still running 1.21.4 when I viewed the node list in the Portal or with kubectl.
  2. The Windows node pool displayed a node count of 3, but it also showed “0/0 ready” with NO instances in the node list.

Problem #1 was solved by scaling down the pool to 1 instance, and then scaling back to 2. AKS removed and re-created the VMSS instance properly and it all looked good.

Problem #2 was harder – kubectl didn’t see the nodes at all, but I did find the VMSS with the correct number of instances and they appeared healthy (as far as Virtual Machines go). Performing scaling operations on the node pool through AKS affected the VMSS properly (scaling right down to zero even) however these actions didn’t resolve the problem of kubectl not knowing the nodes existed.

I’m coming into both AKS and Kubernetes pretty blind and ignorant, so I began looking at how I could get onto the Nodes themselves and dig through some logs.

This Microsoft Doc talks about viewing the kubelet logs, using an SSH connection to your nodes through a debug container. However, this didn’t work for me because I didn’t have the original SSH keys from cluster setup, and even though I reset the Windows Node credentials (az aks update --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --windows-admin-password $NEW_PW) I still received public key errors when attempting to SSH.

Instead, I dropped a new VM into the virtual network with a public IP, and gave myself RDP access to this as a jump host. From here, I could perform RDP directly into my Windows Nodes, as well as SMB access to \\nodeIP\c$.

This let me look at this path: c:\k\kubelet.log
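
A simple way to read it from the node (or over the \\nodeIP\c$ share) is:

Get-Content C:\k\kubelet.log -Tail 100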

In that log, I found this error:

E1223 15:50:40.001852 4532 server.go:194] "Failed to validate kubelet flags" err="the DynamicKubeletConfig feature gate must be enabled in order to use the --dynamic-config-dir flag"

Exception calling "RunProcess" with "3" argument(s): "Cannot process request because the process (4532) has exited."
At C:\k\kubeletstart.ps1:266 char:1
+ [RunProcess.exec]::RunProcess($exe, $args, [System.Diagnostics.Proces ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : InvalidOperationException

I also found errors in the file c:\k\kubeproxy.err.log about missing processes azure-vnet.exe and azure-vnet-ipam.exe.

I did a bunch of reading about Troubleshooting Kubernetes Networking on Windows, and ran “hnsdiag list all” as part of that process, discovering it returned zero entries.

At this point, I spun up a new Windows node pool to use as comparison. Here’s a couple things I found:

  • c:\k\config was missing on my broken node
  • “hnsdiag list all” produced lots of output on a good node, and virtually empty on my bad node
  • The good node had a lot of extra files in C:\k\ related to azure-vnet and azure-vnet-ipam

I began looking into the error listed above, specifically around “the DynamicKubeletConfig feature gate must be enabled”. My searching led me to this K8s page on dynamic kubelet configuration, which states the feature is deprecated as of 1.22.

Now I wanted to find where that feature flag was coming from.

The kubelet and kube-proxy processes run as Windows services on these nodes.

I wanted to see what executable these were actually running, which you can do with this command:

Get-WmiObject win32_service | ?{$_.Name -like '*kube*'} | select Name, DisplayName, State, PathName

Interestingly, it is using NSSM. Luckily I’m familiar with that tool for running Windows services, and you can dump the config for a service like this:

.\nssm dump kubelet

OK, so the kubelet service is actually running a PowerShell script: c:\k\kubeletstart.ps1.

I opened that file and started digging. Right away it became apparent where this “DynamicKubeletConfig” flag in the kubelet service arguments was coming from.

The first line pulls in ClusterConfiguration from a file, and then on line 35 that is turned into the $KubeletArgList variable:

# Line 1
$Global:ClusterConfiguration = ConvertFrom-Json ((Get-Content "c:\k\kubeclusterconfig.json" -ErrorAction Stop) | out-string)
# Skip a bunch of stuff until line 35:
$KubeletArgList = $Global:ClusterConfiguration.Kubernetes.Kubelet.ConfigArgs # This is the initial list passed in from aks-engine


I could inspect this PowerShell variable and see the flag added there. I then compared the C:\k\kubeclusterconfig.json file between my good and bad nodes, and found that flag was the only difference between the two!
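
A quick way to see what ends up in that argument list on a node is to read the same file and property path the startup script uses:

$cfg = Get-Content "C:\k\kubeclusterconfig.json" -Raw | ConvertFrom-Json
$cfg.Kubernetes.Kubelet.ConfigArgs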

I removed that line from the file and saved it, and then forced a restart of the kubelet and kube-proxy services.
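
Using the service names from the earlier Get-WmiObject output, the restart is just (service names assumed to match your node):

Restart-Service -Name kubelet -Force
Restart-Service -Name kubeproxy -Force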

It appeared to work! Now kubectl and Azure Portal recognize my node, the C:\k\config file and c:\k\azure-vnet.* files were auto-generated, and my pods started being scheduled properly.

Now my question is, “how come this file didn’t get updated properly to remove the flag, and why did this continue to be an issue every time I scaled a new instance in the VMSS?”.

With 1 working node, I scaled my node pool to a count of 2. What I expected was that the node count would show as 2, but that it would say “1/1 ready” with only a single node still listed from ‘kubectl get nodes’. I am assuming that however this config is stored for the VMSS, editing the file on a single running instance doesn’t update it for the others.

And that is exactly what happened.

That is the next thread I’ll be pulling on, and will post an update to this when I find out more.

Update – 2022-01-05

I’ve received information from Microsoft Support that this is an internal (non-public) bug, “Nodes failed to register with API server after upgrading to 1.22 AKS version”, that the AKS team is working on. However, I’m told that even after a fix has been rolled out, I will need to create a new node pool to resolve this issue; the fix won’t be back-ported to existing node pools.


DSC disk resource failing due to defrag

I worked through an interesting problem today occurring with Desired State Configuration tied into Azure Automation.

In this scenario, Azure Virtual Machines are connected to Azure Automation for Desired State Configuration and are configured with a variety of resources. One of them, the “Disk” resource, is failing, although it was previously working.

The PowerShell DSC resource ‘[Disk]EVolume’ with SourceInfo ‘::1208::13::Disk’ threw one or more non-terminating errors while running the Test-TargetResource functionality. These errors are logged to the ETW channel called Microsoft-Windows-DSC/Operational. Refer to this channel for more details.

I needed more detail, so let’s see what an interactive run of DSC on the failing virtual machine shows. While I can view the logs located in “C:\Windows\System32\Configuration\ConfigurationStatus”, I found that in this case they don’t reveal any additional detail beyond what the Azure Portal does.

I ran DSC interactively with this command:

Invoke-CimMethod -CimSession $env:computername -Name PerformRequiredConfigurationChecks -Namespace root/Microsoft/Windows/DesiredStateConfiguration -Arguments @{Flags=[Uint32]2} -ClassName MSFT_DscLocalConfigurationManager -Verbose

Now we can see the output of this resource in DSC better:

Invoke-CimMethod : Invalid Parameter
Activity ID: {aab6d6cd-1125-4e9c-8c4e-044e7a14ba07}
At line:1 char:1
+ Invoke-CimMethod -CimSession $env:computername -Name PerformRequiredC ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (StorageWMI:) [Invoke-CimMethod], CimException
    + FullyQualifiedErrorId : StorageWMI 5,Get-PartitionSupportedSize,Microsoft.Management.Infrastructure.CimCmdlets.InvokeCimMethodCommand

This isn’t very useful on its own, but the error does lead to an issue logged against the StorageDsc module that is directly related:

https://github.com/dsccommunity/StorageDsc/issues/248

The “out-of-resource” test that ianwalkeruk provides reproduces the error on my system:

$partition = Get-Partition -DriveLetter 'E' | Select-Object -First 1
$partition | Get-PartitionSupportedSize

Running this on my system produced a similar error.

There does happen to be a known issue on the Disk resource in GitHub with Get-PartitionSupportedSize and the defragsvc:

https://github.com/dsccommunity/StorageDsc/wiki/Disk#defragsvc-conflict

Looking at the event logs on my VM, I can see that the nightly defrag from the default scheduled task has been failing:

The volume Websites (E:) was not optimized because an error was encountered: The parameter is incorrect. (0x80070057)

Looking at the docs for Get-PartitionSupportedSize, there is a note that says: “This cmdlet starts the ‘Optimize Drive’ (defragsvc) service.”

Based on the timing of events, it appears that defrag hasn’t been able to complete successfully in a long time, because its duration is longer than the DSC refresh interval: when DSC runs and eventually triggers Get-PartitionSupportedSize, it aborts the defrag. Even running it manually I can see this occur:

The user cancelled the operation. (0x890000006)

At this point, I don’t know what it is about a failed defrag state that is causing Get-PartitionSupportedSize to fail with “Invalid Parameter” – even when defrag isn’t running that cmdlet fails.

However, in one of my systems with this problem, if I ensure that the defrag successfully finishes (by manually running it after each time DSC kills it, making incremental progress), then we can see Get-PartitionSupportedSize all of a sudden succeed!
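
For reference, a manual defrag of the volume can be kicked off like this (drive letter taken from the failing DSC resource):

Optimize-Volume -DriveLetter E -Defrag -Verbose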

And following this, DSC now succeeds!

So if you’re seeing “Invalid Parameter” coming from Get-PartitionSupportedSize, make sure you’ve got successful Defrag happening on that volume!