Use case: There’s a set of files/scripts/templates that I want to keep in sync on a set of servers, but only on-demand.
There are a few different ways to solve this, but one approach, following a pattern I’ve used a few times, is to have an Azure DevOps pipeline that populates an Azure File Share, and then a separate script deployed on the servers that can pull files from the File Share on demand.
The script below is a YAML pipeline for Azure DevOps that uses an AzurePowerShell task.
The primary issue I had to work around (at least using the Azure PowerShell module) is that the cmdlet “Set-AzStorageFileContent” requires the parent directory to exist; it won’t auto-create it. Unfortunately, “New-AzStorageDirectory” has the same problem, not creating directories recursively.
So the PowerShell script below has two sections: first it creates all the folders by ensuring each leaf in the path of each distinct folder exists, and then it populates them with files.
variables:
  storageAccountName: "stg123"
  resourcegroupName: "teststorage-rg"
  fileShareName: "firstfileshare"

trigger:
  branches:
    include:
      - main
  paths:
    include: # Only trigger the pipeline on this path in the git repo
      - 'FileTemplates/*'

pool:
  vmImage: 'windows-latest'

steps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: 'AzureSubConnection' # This is the DevOps service connection name
    ErrorActionPreference: 'Stop'
    FailOnStandardError: true
    ScriptType: 'InlineScript'
    Inline: |
      $accountKey = (Get-AzStorageAccountKey -ResourceGroupName $(resourcegroupName) -Name $(storageAccountName))[0].Value
      $ctx = New-AzStorageContext -StorageAccountName $(storageAccountName) -StorageAccountKey $accountKey
      $s = Get-AzStorageShare $(fileShareName) -Context $ctx
      # We only want to copy a subset of files in the repo, so set the script location to that path
      Set-Location "$(Build.SourcesDirectory)\FileTemplates"
      $CurrentFolder = (Get-Item .).FullName
      $files = Get-ChildItem -Recurse -File
      # Get all the unique folder paths, relative to the current folder and without filenames
      $folders = $files.FullName.Substring($CurrentFolder.Length + 1).Replace("\", "/") | Split-Path -Parent | Get-Unique
      # Make sure the top-level folder exists, even for files sitting at the root of the path
      $s.ShareClient.GetDirectoryClient("scripts").CreateIfNotExists() | Out-Null
      # Create a directory for every leaf of every possible path
      foreach ($folder in $folders) {
        if ($folder -ne "") {
          $folderpath = ("scripts\" + $folder).Replace("\", "/") # Prefix a top-level folder to organize within the Azure Share
          $foldersPathLeafs = $folderpath.Split("/")
          if ($foldersPathLeafs.Count -gt 1) {
            foreach ($index in 0..($foldersPathLeafs.Count - 1)) {
              $desiredfolderpath = [string]::Join("/", $foldersPathLeafs[0..$index])
              try {
                $s.ShareClient.GetDirectoryClient("$desiredfolderpath").CreateIfNotExists()
              }
              catch {
                $message = $_
                Write-Warning "That didn't work: $message"
              }
            }
          }
        }
      }
      # Upload each file
      foreach ($file in $files) {
        $path = $file.FullName.Substring($CurrentFolder.Length + 1).Replace("\", "/")
        $path = "scripts/" + $path # Prefix with the same top-level folder used when creating directories above
        Write-Output "Writing: $($file.FullName)"
        try {
          Set-AzStorageFileContent -Share $s.CloudFileShare -Source $file.FullName -Path $path -Force
        }
        catch {
          $message = $_
          Write-Warning "That didn't work: $message"
        }
      }
    azurePowerShellVersion: 'LatestVersion'
  displayName: "Azure Files Storage Copy"
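On each server, the on-demand pull can then be a very small script. Here’s a minimal sketch using AzCopy, assuming AzCopy is installed on the servers and a read-only SAS token for the share is available to them; the account name, share name, SAS variable, and destination path below are placeholders:

# Minimal sketch of the on-demand pull on a server; SAS handling and paths are assumptions
$sas = $env:FILESHARE_SAS   # e.g. retrieved from a secure store, not hard-coded
& azcopy copy "https://stg123.file.core.windows.net/firstfileshare/scripts?$sas" "C:\LocalScripts" --recursive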
I’m currently building a container on a Windows Server Core base image with IIS. The intention will be to run this within Azure Kubernetes Service (AKS), on Windows node pools.
One of the challenges I’m working with is the desire to meet both these requirements:
Able to always place our application in a consistent and standard path (like c:\app)
Need to be able to serve the app behind customizable virtual paths
For example, /env/app/webservice or /env/endpoint
These virtual paths should be specified at runtime, not in the container build (to reduce the number of unique containers)
A unique domain cannot be required for each application
One of the thoughts is that while testing the application locally, I want to be able to reach the application at the root path (i.e. http://localhost:8080/) but when put together in the context of a distributed system, I want to serve this application behind a customizable path.
In AKS, using the ingress-nginx controller, I can use the “rewrite-target” annotation so that my ingress represents the virtual path while the application stays at the root of IIS in the container. However, this quickly falls down when applications have non-relative links for stylesheets and JavaScript includes.
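For reference, that ingress-nginx pattern looks roughly like this; the service name, path, and capture-group rewrite below are illustrative rather than my exact manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-webservice
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /envtest/app1/webservice(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: app1-webservice
                port:
                  number: 80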
One idea was to place the application in the root (c:\inetpub\wwwroot) and then add a new Application on my virtual path pointing to the same physical path. However, this caused problems with a duplicate web.config being recognized, because it was picked up from the physical path by both the root Application and my virtual path Application. This could be mitigated in the web.config with the use of “<location inheritInChildApplications="false">” tags, but I also realized I don’t need BOTH requirements to be available at the same time. If a variable virtual path is passed into my container, I don’t need the application served at the root.
With this in mind, I set about creating logic like this:
In the Dockerfile, place the application at c:\app
If the environment variable “VirtualPath” exists
Create an IIS Application pointing at the supplied Virtual Path, with a physical path of c:\app
else
Change the physical path of “Default Web Site” to c:\app
I tested this in the GUI on a Windows Server 2019 test virtual machine, and it appeared to work for my application just fine. However, when I tested using PowerShell (intending to move functional code into my docker run.ps1 script), unexpected errors occurred.
Here’s what I was attempting:
New-WebVirtualDirectory -Name "envtest/app1/webservice" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"
And here is the error the application produced when I browsed it under that virtual path:
The view at ‘~/Views/Home/Index.cshtml’ must derive from WebViewPage, or WebViewPage<TModel>
Interestingly, displaying straight HTML within this virtual path for the Application works just fine – it is only the MVC app that has an error.
The application I’m testing with is an ASP.NET MVC application, but none of the common solutions to this problem are relevant – the application works just fine at the root of a website, just not when served under a virtual path.
Using the context from the Octopus link above, I began digging a little deeper and testing. Primarily targeting the ApplicationHost.config file located at “C:\windows\system32\inetsrv\Config”.
When I manually created my pathing in the GUI that was successful (creating each virtual subdir), the structure within the Site in this config file looked like this:
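Reconstructed from the PowerShell that ended up working (below), the relevant part of the Site element looks roughly like this, with intermediate virtual directories under the root Application plus a separate Application for the final leaf:

<site name="Default Web Site" id="1">
    <application path="/" applicationPool="DefaultAppPool">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
        <virtualDirectory path="/envtest" physicalPath="C:\inetpub\wwwroot" />
        <virtualDirectory path="/envtest/app1" physicalPath="C:\inetpub\wwwroot" />
    </application>
    <application path="/envtest/app1/webservice" applicationPool="DefaultAppPool">
        <virtualDirectory path="/" physicalPath="C:\app" />
    </application>
</site>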
It seems clear that while IIS can serve content under the virtual path I created, MVC doesn’t like the missing virtual directories.
When I expanded my manual PowerShell implementation to look like this, then the application began to work without error:
New-WebVirtualDirectory -Name "/envtest" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"
New-WebVirtualDirectory -Name "/envtest/app1" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot"
New-WebApplication -Name "/envtest/app1/webservice" -PhysicalPath "C:\app\" -Site "Default Web Site" -ApplicationPool "DefaultAppPool"
I could then confirm that my ApplicationHost.config file matched what was created in the GUI.
The last piece of this for me was turning a Virtual Path environment variable that could contain any kind of pathing, into the correct representation of IIS virtual directories and applications.
Here’s an example of how I’m doing that:
if (Test-Path "ENV:VirtualPath")
{
    # Trim the start in case a prefixed forward slash was supplied
    $ENV:VirtualPath = $ENV:VirtualPath.TrimStart("/")
    Write-Host "Virtual Path is passed, will configure IIS web application"

    # We have to ensure the Application/VirtualDirectory in IIS gets created properly when the path has multiple elements
    # Otherwise IIS won't serve some applications properly, like ASP.NET MVC sites
    Import-Module WebAdministration

    $leafcount = $ENV:VirtualPath.Split("/").Count
    # For each item in the Virtual Path, excluding the last leaf (which becomes the Application below)
    if ($leafcount -gt 1) {
        foreach ($leaf in 0..($leafcount - 2)) { # minus 1 for 0-based indexing, and 1 more to drop the last leaf
            if ($leaf -eq 0) {
                # If we're at the first index of the VirtualPath, just use it
                $usepath = $ENV:VirtualPath.Split("/")[$leaf]
            } else {
                # If not the first index, concatenate all previous indexes
                $usepath = [string]::Join("/", $ENV:VirtualPath.Split("/")[0..$leaf])
            }
            New-WebVirtualDirectory -Name "$usepath" -Site "Default Web Site" -PhysicalPath "C:\inetpub\wwwroot" # Don't specify Application, default to root
        }
    }

    # Create the Application with the full Virtual Path (making the last element effective)
    New-WebApplication -Name "$ENV:VirtualPath" -PhysicalPath "C:\app\" -Site "Default Web Site" -ApplicationPool "DefaultAppPool" # Expects no leading forward slash
} else {
    # Since no virtual path was passed, point Default Web Site at C:\app
    Set-ItemProperty -Path "IIS:\Sites\Default Web Site" -Name "physicalPath" -Value "C:\app\"
}
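With that in place, the virtual path is supplied when the container is started; for example, something like this (the image name and port mapping are placeholders):

# Serve the app behind /envtest/app1/webservice
docker run -d -p 8080:80 -e VirtualPath="envtest/app1/webservice" myregistry/iis-app:latest

# Or omit the variable to serve the app at the root of Default Web Site
docker run -d -p 8080:80 myregistry/iis-app:latest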
Today while exploring the Azure Kubernetes Service docs, specifically looking at Storage, I came across a note about StorageClasses:
You can create a StorageClass for additional needs using kubectl
This combined with the description of the default StorageClasses for Managed Disks being Premium and Standard SSD led me to question “what if I want a Standard HDD for my pod?”
This is absolutely possible!
First I took a look at the parameters for an existing StorageClass, the ‘managed-csi’:
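You can inspect that class with kubectl; for the built-in managed-csi class, the relevant pieces are the provisioner (disk.csi.azure.com) and the “skuname” parameter (StandardSSD_LRS):

kubectl get storageclass managed-csi -o yaml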
While the example provided in the link above uses the old ‘in-tree’ methods of StorageClasses, this gave me the proper Provisioner value to use the Container Storage Interface (CSI) method.
In reality, I took a guess at the “skuname” parameter here, replacing “StandardSSD_LRS” with “StandardHDD_LRS”. Having used Terraform with Managed Disk SKUs before, I figured this wasn’t going to be valid, but I wanted to see what happened.
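The StorageClass I applied looked something like this; the metadata name and the reclaim/binding settings are illustrative, and the skuname is the guessed value:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-hdd
provisioner: disk.csi.azure.com
parameters:
  skuname: StandardHDD_LRS   # guessed value - not actually valid
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true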
Then I performed a ‘kubectl apply -f filename.yaml’ to create my StorageClass. This worked without any errors.
To test, I created a PersistentVolumeClaim, and then a simple Pod, with this yaml:
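Roughly like the following; the names are illustrative, and the claim references the new StorageClass by name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hdd-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-hdd
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: hdd-test
spec:
  containers:
    - name: test
      image: nginx   # any simple image works for this test
      volumeMounts:
        - mountPath: /mnt/test
          name: hdd-volume
  volumes:
    - name: hdd-volume
      persistentVolumeClaim:
        claimName: hdd-claim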
After applying this with kubectl, my PersistentVolumeClaim was in a Pending state, and the Pod wouldn’t create. I looked at the Events of my PersistentVolumeClaim, and found an error as expected:
This is telling me my ‘skuname’ value isn’t valid and instead I should be using a supported type like “Standard_LRS”.
Using kubectl I deleted my Pod, PersistentVolumeClaim, and StorageClass, modified my yaml, and re-applied.
This time, the claim was created successfully, and a persistent volume was dynamically generated. I can see the disk created as the correct type in the Azure Portal listing of disks.
The supported values in that error message also tell me I can create ZRS-enabled StorageClasses, but only for Premium and StandardSSD managed disks.
Here’s the proper functioning yaml for the StorageClass, with the skuname fixed:
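The only change needed from the earlier attempt is the skuname (again, the class name here is illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-hdd
provisioner: disk.csi.azure.com
parameters:
  skuname: Standard_LRS   # Standard HDD managed disk
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true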
One of my Packer builds for a Windows image is using AzCopy to download files from Azure blob storage. In some circumstances I’ve had issues where the AzCopy “copy” command fails with a Go runtime error that looks memory-related.
I had already set the AzCopy environment variable AZCOPY_BUFFER_GB to 1GB, and I also increased my pagefile size (knowing Windows doesn’t always grow it on demand reliably), but neither change resolved the failure.
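For reference, the buffer cap is just an environment variable that AzCopy reads at runtime; in a PowerShell provisioner it gets set along these lines (the source URL, SAS token, and destination are placeholders):

# Limit AzCopy's in-memory buffer to roughly 1 GB before running the copy
$env:AZCOPY_BUFFER_GB = "1"
& azcopy copy "https://<storageaccount>.blob.core.windows.net/<container>/<path>?<SAS>" "C:\BuildFiles" --recursive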
I worked through an interesting problem today occurring with Desired State Configuration tied into Azure Automation.
In this scenario, Azure Virtual Machines are connected to Azure Automation for Desired State Configuration and are configured with a variety of resources. One of them, the “Disk” resource, is failing, although it was previously working.
The PowerShell DSC resource ‘[Disk]EVolume’ with SourceInfo ‘::1208::13::Disk’ threw one or more non-terminating errors while running the Test-TargetResource functionality. These errors are logged to the ETW channel called Microsoft-Windows-DSC/Operational. Refer to this channel for more details.
I need more detail, so let’s see what an interactive run of DSC on the failing virtual machine shows. While I can view the logs located in “C:\Windows\System32\Configuration\ConfigurationStatus”, I found that in this case they don’t reveal any additional detail beyond what the Azure Portal does.
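An interactive run against the configuration already delivered to the node can be done with something like this (standard DSC cmdlets, shown here as a sketch):

# Check which resources are failing their Test phase
Test-DscConfiguration -Detailed
# Re-apply the current configuration interactively with verbose output
Start-DscConfiguration -UseExisting -Wait -Verbose -Force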
Looking at the event logs on my VM, I can see that the nightly defrag from the default scheduled task has been failing:
The volume Websites (E:) was not optimized because an error was encountered: The parameter is incorrect. (0x80070057)
Looking at the docs for Get-PartitionSupportedSize, there is a note that says “This cmdlet starts the “Optimize Drive” (defragsvc) service.”
Based on the timing of events, it appears that defrag hasn’t been able to successfully complete in a long time, because its duration is longer than the DSC refresh interval – when DSC runs and eventually triggers Get-PartitionSupportedSize, it aborts the defrag. Even running this manually I can see this occur:
The user cancelled the operation. (0x890000006)
At this point, I don’t know what it is about a failed defrag state that is causing Get-PartitionSupportedSize to fail with “Invalid Parameter” – even when defrag isn’t running that cmdlet fails.
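For reference, this is the kind of call that reproduces the failure outside of DSC, with E: being the affected volume from the defrag error above:

# Reproduces the call that the Disk DSC resource eventually triggers
Get-PartitionSupportedSize -DriveLetter E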
However, in one of my systems with this problem, if I ensure that the defrag successfully finishes (by manually running it after each time DSC kills it, making incremental progress), then we can see Get-PartitionSupportedSize all of a sudden succeed!
And following this, DSC now succeeds!
So if you’re seeing “Invalid Parameter” coming from Get-PartitionSupportedSize, make sure you’ve got successful Defrag happening on that volume!
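If you want to kick that optimization off manually rather than waiting for the nightly scheduled task, something like this works:

# Manually run the same defrag/optimization the nightly ScheduledDefrag task performs
Optimize-Volume -DriveLetter E -Defrag -Verbose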