Install Visio volume license alongside Office 365

At my company we get a certain number of seats of the Visio volume license product through our Microsoft Partner benefits, but it is not a subscription product, and the volume key cannot simply be entered to activate Visio because our Office 365 is installed as O365ProPlusRetail.

Here I’ll describe how to install the Visio volume license product side-by-side with our standard installation of Office 365, which is a supported scenario according to this Microsoft doc.

At first I had problems where installing the product “VisioPro2019Volume” would downgrade my O365 build to the 1808 version. I posted a comment on this Microsoft Docs issue because it seemed related, and received a very helpful reply from Martin with another Microsoft Docs page describing how to build lean and dynamic install packages for O365. This was the key to configuring my XML file for proper side-by-side installation.

I also used the super helpful Office Customization Tool while figuring this out.

Procedure

  1. Download the Office Deployment Tool.
  2. Install the tool to a folder on your workstation. Create a new XML file named “newvisio.xml” that looks like this (update the PIDKEY value):
<Configuration ID="6659a04f-037d-4f23-b8a2-64c851090a5e">
  <Add Version="MatchInstalled">
    <Product ID="VisioPro2019Volume" PIDKEY="insert Key">
      <Language ID="MatchInstalled" TargetProduct="O365ProPlusRetail" />
    </Product>
  </Add>
</Configuration>
     Note: You can get the Key from the Partner portal (go into MPN → Benefits → Software) and enter it into your XML file.
  3. If you have Visio installed already as part of O365, you will need to remove it first:
      1. Create a configuration XML file named removevisio.xml:
    <Configuration ID="ba49a53d-04c0-44a6-b591-c099d9c4e6ed">
      <Remove OfficeClientEdition="64" Channel="Monthly">
        <Product ID="VisioProRetail">
          <Language ID="en-us" />
          <ExcludeApp ID="Access" />
          <ExcludeApp ID="Excel" />
          <ExcludeApp ID="Groove" />
          <ExcludeApp ID="Lync" />
          <ExcludeApp ID="OneDrive" />
          <ExcludeApp ID="OneNote" />
          <ExcludeApp ID="Outlook" />
          <ExcludeApp ID="PowerPoint" />
          <ExcludeApp ID="Publisher" />
          <ExcludeApp ID="Teams" />
          <ExcludeApp ID="Word" />
        </Product>
      </Remove>
      <Display Level="Full" AcceptEULA="TRUE" />
    </Configuration>
      2. Place this file where the deployment tool was downloaded, and then run it:
         setup /configure removevisio.xml
      3. The uninstall will proceed; don’t be alarmed, it’s not removing all of Office if you set your XML properly.
  4. Once the uninstall of the previous version is complete, install with this command:
     setup /configure newvisio.xml
  5. Now you should have Office 365 on subscription, but Visio on Volume License.
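
To double-check the result, the Click-to-Run configuration in the registry lists every installed product release. Here is a quick verification I use (this assumes a default Click-to-Run install; it is not part of the official procedure):

(Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration").ProductReleaseIds
# Should return both O365ProPlusRetail and VisioPro2019Volume once the side-by-side install succeeds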

Add PRTG sensor through PowerShell module

Once I established a way to perform an Azure Function call from a PRTG REST sensor, I wanted to programmatically deploy this sensor for consistency across multiple environments.

To do so I made use of the excellent PrtgAPI project from lordmilko. This wraps the PRTG REST API in easy-to-understand PowerShell, which makes it much easier for my team to use and re-use the things we write with it.

What follows is an extremely bare-bones method of deploying a PRTG custom REST sensor using PrtgAPI with PowerShell. It does not contain appropriate tests or error handling, handling of parameter changes, or removal. Thus warned, use at your own risk.

GitHub script file

This example is specifically built for an Azure Function which monitors an Application Gateway health probe, and the parameters are tailored as such.
First I define the parameters to be used, aligned with the Application Gateway HTTP setting I want to monitor – as in my previous post, I wanted a separate sensor for each HTTP setting associated with a listener.

# Input Variables - update accordingly
$ClientCode = "abc"
$resourcegroupname = "$($clientcode)-srv-rg" # resource group where the Application Gateway is located
$appgwname = "$clientcode AppGw" # name of the application gateway
$httpsettingname = @("Test","Prod")
$probename = "Probe1"
$subscriptionid = "549d4d62" # Client Azure subscription
 
$appsvcname = "AppGw Monitor"
$appsvcFQDN = "prod-appsvc.azurewebsites.net"
$functionName = "Get-AppGw-Health"
$functionKey = "secret function key for PRTG"
$tenantid = "f5f7b07a" # Azure tenant id

Then I use the PrtgAPI module – installing it if it isn’t already present – and connect to a PRTG Core:

# PrtgAPI is being used: https://github.com/lordmilko/PrtgAPI
#Check if module is installed, and install it if not
$moduleinstalled = get-module prtgapi -listavailable
if ($moduleinstalled) {
    Write-Host "Pre-requisite Module is installed, will continue"
}
else {
    Install-Package PrtgAPI -Source PSGallery -Force
    Write-Host "Installing PrtgApi from the PSGallery"
}
 
# Check and see if we're already connected to PRTG Core
$prtgconnection = Get-PrtgClient
if (!$prtgconnection) {
    # If not, make the connection
    Write-Host "You will now be prompted for your PRTG credentials"
    Connect-PrtgServer prtgserver.domain.com
}
Write-Host "Connected to PRTG. Proceeding with setup."

Next I test for existence of a device, which will be used to branch whether I am creating a sensor under the device, or need to create the device and the sensor together:

# Using our defined group structure, check for the device existence
$device = Get-Probe $probename | Get-Group "Services" | Get-Group "Application Gateways" | Get-Device $appsvcname
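
The “create if missing” side of that branch isn’t shown in this post; here’s a minimal sketch of what it could look like with PrtgAPI’s Add-Device, reusing the group path and the $appsvcFQDN variable defined earlier (treat it as an illustration, not the exact script):

# If the device doesn't exist yet, create it under the same group structure
if (!$device) {
    $group = Get-Probe $probename | Get-Group "Services" | Get-Group "Application Gateways"
    $device = $group | Add-Device -Name $appsvcname -Host $appsvcFQDN
}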

Because I have one Application Gateway with multiple HTTP settings serving multiple back-end pools, I need a foreach loop around each object in the httpsetting array. Inside that loop, I build the JSON body that will be passed in the POST request to the Azure Function:

$Body = @"
{
    "httpsettingname": "$setting",
    "resourcegroupname": "$resourcegroupname",
    "appgwname": "$appgwname",
    "subscriptionid": "$subscriptionid",
    "tenantid": "$tenantid"
}
"@

Now I check for the device and sensor (fairly self-explanatory) and finally get to the meat and potatoes of the sensor creation.
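
That existence check isn’t shown in the excerpt below; inside the loop it could be as simple as this (a hedged sketch using PrtgAPI’s Get-Sensor, with the sensor name format that gets set further down):

# Skip this http setting if the sensor already exists on the device
$sensorname = "$($appgwname) $($setting)"
if ($device | Get-Sensor $sensorname) {
    Write-Host "Sensor '$sensorname' already exists, skipping"
    continue
}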

Up first is defining the parameters for the sensor that will be created. The wiki for PrtgAPI recommends the use of Dynamic Parameters and to start by constructing a set of DynamicSensorParameters from the raw sensor type identifier.

Once I have that in a PowerShell object, I begin to apply my own values for various parameter attributes:

# Gather a default set of parameters for the type of sensor that we want to create
# Select-Object -First 1 because PrtgAPI seems to find multiple devices with the same name
$params = Get-Device $device | New-SensorParameters -RawType restcustom | Select-Object -First 1

# Populate the sensor parameters with our desired values
$params.query = "/api/$($functionName)?code=$functionKey"
$params.jsonfile = "azureappgwhealth.template" # use the standard template that was built
$params.protocol = 1 # sets as HTTPS
$params.requestbody = $body
$params.Interval = "00:05:00" # 5 minute interval, deviates from the default
$params.requesttype = 1 # this makes it a POST instead of GET
if ($setting -like "*prod*") {
    # Set some Tags on the sensor
    $params.Tags = @("restcustomsensor", "restsensor", "Tier2", "$($ClientCode.toUpper())", "ApplicationGateway", "PRTGMaintenance", "Production")
}
else {
    # Assume Test if not Prod, and set a different set of Tags
    $params.Tags = @("restcustomsensor", "restsensor", "Tier2", "$($ClientCode.toUpper())", "ApplicationGateway", "PRTGMaintenance", "NonProduction")
}
$params.Name = "$($appgwname) $($setting)"

Finally, I can create the sensor by using this one line:

$sensor = $device | Add-Sensor $params # Create the sensor
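
If you want to verify the result, PrtgAPI can read back the channels that the custom sensor template generated, for example:

$sensor | Get-Channel # lists the auto-created channels (one per back-end address)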

That’s pretty much it! For each of the different HTTP settings and health probe back-ends, I modify the variables at the top of this script and run it again. Obviously there are much better ways to make this reproducible, but I haven’t been able to commit that time.

Azure Function called from PRTG REST sensor

Using PRTG for service monitoring has been fairly effective for me, particularly with HTTP Advanced sensors to monitor a website. However, as more and more Azure resources are utilized, I want to keep my alerting and notifications centralized in a single platform, and that means integrating some Azure resource status into PRTG.

At a high level, here’s what needs to happen:

  • Azure Function triggered on HTTP POST to query Azure resources and return data
  • PRTG custom sensor template to interpret the results of the Function data
  • PRTG custom lookup to establish a default up/down threshold
  • PRTG REST sensor to trigger the function, and use the sensor template and custom lookup to properly display results

Azure Function

For my first use case, I wanted to see the health status of the back-end pool members of an Application Gateway.

The intended goal is that if a member becomes unhealthy, PRTG alerts using our normal mechanisms. I could use an Azure Monitor alert to trigger something when this event happens, but in reality it is easier to have PRTG poll than to have Azure Monitor push into PRTG.

I’m not going to cover the full walk-through of building an Azure Function; instead here is a good starting point.

I’m using a PowerShell function, where the full source can be found here: GitHub link

Here’s a snippet of the part doing the heavy lifting:

#Proceed if all request body parameters are found
if ($appgwname -and $httpsettingname -and $resourcegroupname -and $subscriptionid -and $tenantid) {
    $status = [HttpStatusCode]::OK
    # Make sure we're using the right Subscription
    Select-AzSubscription -SubscriptionID $subscriptionid -TenantID $tenantid
    # Get the health status, using the Expanded Resource parameter
    $healthexpand = Get-AzApplicationGatewayBackendHealth -Name $appgwname -ResourceGroupName $resourcegroupname -ExpandResource "backendhealth/applicationgatewayresource"
    # If serving multiple sites out of one AppGw, use the parameter $httpsettingname to filter so we can better organize in PRTG
    $filtered = $healthexpand.BackEndAddressPools.BackEndhttpsettingscollection | where-object { $_.Backendhttpsettings.Name -eq "$($httpsettingname)-httpsetting" }
    # Return results as boolean integers, either healthy or not. Could modify this to return additional values if desired
    $items = $filtered.Servers | select-object Address, @{Name = 'Health'; Expression = { if ($_.Health -eq "Healthy") { 1 } else { 0 } } }
    # Add a top-level property so that the PRTG custom sensor template can interpret the results properly
    $body = @{ items = $items }
 
}
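
The snippet ends by building $body; in a typical PowerShell Azure Function the result is then returned to the caller through the Response output binding, roughly like this (a sketch of the standard pattern, not copied from the linked source):

# Return the health data to the caller as JSON
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = $status
    Body       = ($body | ConvertTo-Json -Depth 4)
})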

You can test this function using the Azure Functions GUI or Postman, or PowerShell like this:

    $appsvcname = "appsvc.azurewebsites.net"
    $functionName = "Get-AppGw-Health"
    $functionKey = " insert key here "
    $Body = @"
{
    "httpsettingname": " prodint ",
    "resourcegroupname": " rgname ",
    "appgwname": " appgw name ",
    "subscriptionid": " subid ",
    "tenantid": " tenant id "
}
"@
    $URI = "https://$($appsvcname)/api/$($functionName)?code=$functionKey"
    Invoke-RestMethod -Uri $URI -Method Post -body $body -ContentType "application/json"

Expected results would look like a JSON body with an “items” array listing each back-end address and its health value (1 for healthy, 0 for unhealthy).

You can see the “Function Key” parameter in this code above; I’ve created a function key for our PRTG to authenticate against, rather than making this function part of a private VNET.

 

PRTG Custom Sensor Template

Now, in order to have PRTG interpret the results of that JSON body, and automatically create channels associated with each Item, we need to use a custom sensor template.

Here’s mine (github link):

{
  "prtg": {
    "description" : {
      "device": "azureapplicationgateway",
      "query": "/api/Get-AppGw-Health?code={key}",
      "comment": "Documentation is in Doc Library"
    },
    "result": [
      {
        "value": {
            #1: $..({ @.Address : @.Health }).*
        },
        "valueLookup": "prtg.customlookups.healthyunhealthy.stateonok",
        "LimitMode": 0,
        "unit": "Custom"
      }
    ]
  }
}

The important part here is the “value” property. The syntax for this isn’t officially documented, but Paessler support has provided a couple of examples that I used, such as here and here. The #1 before the first colon sets the channel name, and uses the first argument referenced within the braces (@.Address in this case). What is inside the braces associates the channel name to the value that is returned, which in this case is the boolean integer for “Health” that the Azure Function returns.

The valueLookup property references the custom lookup explained below.

This Sensor Template file needs to exist on the PRTG Probe that will be calling it, in this location:

Program Files (x86)\PRTG Network Monitor\Custom Sensors\rest

 

PRTG Custom Lookup

I want to be able to control what PRTG detects as “down” based on the values that my Function is returning. To do so, we apply a custom lookup (github link):

<?xml version="1.0" encoding="UTF-8"?>
  <ValueLookup id="prtg.customlookups.healthyunhealthy.stateonok" desiredValue="1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="PaeValueLookup.xsd">
    <Lookups>
      <SingleInt state="Error" value="0">
        Unhealthy
      </SingleInt>
      <SingleInt state="Ok" value="1">
        Healthy
      </SingleInt>
    </Lookups>
  </ValueLookup>

This matches my 0 or 1 return values to the PRTG states that a channel can be in. You can modify this to suit your Function return values.

This lookup file needs to exist on the PRTG Core, in this location:

Program Files (x86)\PRTG Network Monitor\lookups\custom

PRTG REST sensor

So let’s put it all together now. Create a PRTG REST sensor, and apply the following settings to it:

Your PostData should match the parameters you’re receiving into your Azure Function. The REST query directs to the URL where your function is located, and uses the function key that was generated for authentication purposes.

Important to note: the REST sensor depends upon the Device it is created under to generate the hostname portion of the URL. This means you’ll need a Device created with a hostname matching your Function App URL, while the sensor’s REST Query references the function name itself.
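
Using the example values from earlier in this post, that pairing looks roughly like this (the key is whatever function key you generated for PRTG):

Device host name:  prod-appsvc.azurewebsites.net
Sensor REST Query: /api/Get-AppGw-Health?code=<function key>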

Make sure the REST Configuration is directed to your custom sensor template; this can only be set on creation.

Then for each channel that gets auto-detected, go and modify its settings and apply the custom lookup.

With that, you should have a good sensor in PRTG relaying important information collected from Azure!

I’m also using this same method to collect Azure Site Recovery status from Azure and report it within PRTG, using this Function.

Update SCCM Maintenance Window through PowerShell

Sometimes trying to stay bleeding edge is tough – today’s example is when you want to install updates through SCCM a day or two after Patch Tuesday, particularly using Maintenance Windows to allow restarts within a specific time frame.

We use automatic deployment rules to update a Software Update Group every Patch Tuesday – scheduling this is easy because it’s always the second Tuesday of the month.

But I want the updates to install on Wednesday night or Thursday morning. This ensures strict compliance requirements can be met, but allows 24 hours for testing. We can’t just schedule the install and restarts for the “second Wednesday of the month” though, because if the first of the month is a Wednesday (like this month), then our actual install date happens to be the THIRD Wednesday of the month.

Previously we solved this by manually updating Maintenance Window schedules every month, painstakingly selecting the right date and hoping we didn’t mess it up.

PowerShell took that risk away:

SCCM_UpdateMaintenanceWindowSchedule.ps1 – on GitHub

See the comments on the script for details of how it works. As a very brief overview:

I found Tim Curwick’s method of calculating Patch Tuesday, and used that in my script to reliably calculate my Wednesday or Thursday install date.

This script runs as System from an SCCM server on the 1st of every month. It performs the calculation, updates maintenance windows on specific Collections, and outputs a log to file and emails results.
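
To give a feel for the approach without reproducing the script, here is a rough sketch: calculate Patch Tuesday (the second Tuesday), derive the install night from it, and push a new schedule onto the maintenance window. The ConfigurationManager cmdlet usage and the collection/window names below are illustrative assumptions, not lifted from the actual script:

# Patch Tuesday = second Tuesday of the current month
$firstOfMonth = (Get-Date -Day 1).Date
$offset       = ([int][DayOfWeek]::Tuesday - [int]$firstOfMonth.DayOfWeek + 7) % 7
$patchTuesday = $firstOfMonth.AddDays($offset + 7)
$installStart = $patchTuesday.AddDays(1).AddHours(22) # Wednesday night, 24+ hours after Patch Tuesday

# Build a non-recurring schedule and apply it to the collection's maintenance window (names are examples)
$schedule = New-CMSchedule -Start $installStart -Nonrecurring -DurationInterval Hours -DurationCount 4
Set-CMMaintenanceWindow -CollectionName "Patch Group - Wednesday" -MaintenanceWindowName "Monthly Patching" -Schedule $schedule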

 

Azure Site Recovery – ARM template

Programmatic deployment of Azure Site Recovery for Azure VMs – that’s the target I started with for a project a little while ago.

There is a large amount of information for Azure Site Recovery on Microsoft’s Docs; however, code examples for programmatically deploying a full setup are very sparse! Much of what Microsoft provides is PowerShell, which isn’t idempotent and doesn’t fit with my current tooling (Terraform and declarative infrastructure-as-code). I did consider Azure CLI, but couldn’t find any references for Site Recovery.

While working on this I came across an Azure Quickstart which is a little incomplete: it starts with some good variable definitions but quickly devolves into hard-coded values from whatever environment it was exported from. I also discovered a blog post from Pratap Bhaskar which was useful, especially for understanding the loop mechanism in the template, but it didn’t go far enough for my purposes.

So I spun up a few VMs, manually configured ASR, and did an ARM template export. What I received was a huge number of properties on the resources that I was sure were relevant only to runtime, not creation. This was also a good reference, but not exactly where I needed to be.

The final piece of the puzzle that got me on my way was the REST API docs for Site Recovery. With this in hand, and the other sources I had at my disposal, I had the references I needed to begin putting together an ARM Template that would configure my environment end-to-end including Recovery Plans with automation runbooks.

There are a lot of design decisions I made when building this to fit my environment, some of which won’t make sense without additional context; most of which I can’t provide. That’s ok, as I hope it at least serves as a reference for “what’s possible” to others who come across it. Here is the overall structure:

  • Pre-define and create destination resources like resource groups, virtual networks, and subnets with Terraform
  • Deploy ASR for a subset of Virtual Machines, targeting the destination resources
    • Include dependent resources, like the source-side storage account for cache and an Azure Automation account, in the same region and subscription as the ASR resources
  • Deploy a Recovery Plan that provides runbook functionality to configure a Test Failover environment
    • This environment was intended to be completely isolated, to ensure there’s no chance of contamination with prod
    • The current design of my web servers has multiple IP configurations; these need to be replaced
    • Access to the environment is provided through a Jump Host, which needs a known-in-advance IP address

 

I have documented this output on a GitHub repo called arm-azuresiterecovery

The majority of my time was actually spent cleaning up the template into proper parameters and variables for effective re-use, and then solving all the syntax challenges and typos that come along with that.
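
Once parameterized, the template deploys like any other ARM template; here is a hedged example of kicking it off (the file names are placeholders, not necessarily what the repo uses):

# Deploy the ASR template into a pre-created resource group
New-AzResourceGroupDeployment -ResourceGroupName "dr-asr-rg" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json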

 

There are still some loose ends in what I’ve created, around certain manual steps that are still required. However, as I’m sure is common in the industry, it is good enough to deploy and I must move on – fine-tuning comes later.