Azure Update Management deployments – programmatically

I’ve been working through a method to add Scheduled Deployments with Azure Update Management. The particular problem is that the docs for Update Management don’t reference any kind of programmatic way to add Scheduled Deployments, and while I did find the PowerShell cmdlets (in preview), they don’t allow for use of the new features such as Reboot Control and Dynamic Groups.

After a little bit more investigation, I came across the Azure REST API for “Software Update Configurations”. Using a REST API is new to me but at first glance it seems like it would achieve what I wanted, so I dove in.

GET Configuration

I wanted to test with a GET command first, to make sure that I understood how to use the API correctly and interpret the output.

First I created a simple update deployment in the Portal to test with.

When I began looking at how to make the API call, I found a lot of instruction on creating a Service Principal in Azure Active Directory, in order for my code to authenticate against the Azure REST API and make the calls it needs. In my case I’m building a script to run ad-hoc, not part of an integrated program or code-base that needs repeated and fully automated authentication. My thinking was, “if I’m authenticated to Azure in a PowerShell session (through Login-AzureRMAccount) why can’t I just use that to make the call?”

This exact thing is possible, as I discovered through this PowerShell Gallery submission. After loading and testing with this function, I found that it could successfully use my PowerShell session as the access token for my API call.

Putting that together with the GET example, I ended up with this PowerShell script:

# Define my parameters to the Automation Account
$resourceGroupName = "test-rg"
$automationAccountName = "test-automation"
$SubscriptionId = "<subscription id>"
$ConfigName = "2018-08-SUG"
 
# Import the function, contained in a file from the same directory
. .\Get-AzureRmCachedAccessToken.ps1
# Use the function to get the Access Token
$BearerToken = ('Bearer {0}' -f (Get-AzureRmCachedAccessToken))
# Add the Access Token into proper format
$RequestHeader = @{
    "Content-Type"  = "application/json";
    "Authorization" = "$BearerToken"
}
# Build the URI referencing my parameters
$URI = "https://management.azure.com/subscriptions/$($SubscriptionID)/"`
    + "resourceGroups/$resourcegroupname/providers/Microsoft.Automation/"`
    + "automationAccounts/$automationaccountname/softwareUpdateConfigurations/$($ConfigName)?api-version=2017-05-15-preview"
 
# Use the URI and the Request Header (with access token) and the method GET
$GetResponse = Invoke-RestMethod -Uri $URI -Method GET -Headers $RequestHeader

This returned output that looked like this:

id         : /subscriptions/f745d13d/resourceGroups/test-rg/providers/Microsoft.Autom
             ation/automationAccounts/test-automation/softwareUpdateConfigurations/2018-08-SUG
name       : 2018-08-SUG
type       :
properties : @{updateConfiguration=; scheduleInfo=; provisioningState=Succeeded; createdBy={scrubbed}; error=; tasks=;
             creationTime=2018-08-16T14:09:34.773+00:00; lastModifiedBy=;
             lastModifiedTime=2018-10-24T03:56:51.02+00:00}

Fantastic! This can be turned into readable JSON by piping $GetResponse like this:

$GetResponse | ConvertTo-Json

PUT Configuration

Let’s look at what it takes to use the PUT example from the REST API reference. Primarily, the difference is passing in a JSON Body to the “Invoke-RestMethod”, and changing the method to PUT rather than GET.

Before I get to working examples, I want to highlight some problems I ran into while trying to use the API reference. I kept getting obscure errors on the “Targets” section of the JSON body that I couldn’t figure out for a while, until I began looking very closely at the output of my GET from an update deployment created in the portal and compared it against what I was trying to do with my JSON.

Something that particularly helped here was piping my “$GetResponse” like this:

$GetResponse | ConvertTo-JSON -Depth 10

This converts the output of the RestMethod into JSON, and overrides the default depth of 2 so that it expands all the objects within the Targets array.

What I noticed is that the “Targets” object returns a proper structure with a single nested object (“azureQueries”) which itself is a list (as denoted by the square brackets):

 "targets": {
                "azureQueries": [
                    {
                        "scope": [ ],
                        "tagSettings": {},
                        "locations": null
                    }
                ]
            }

However, this is what the API reference uses as its structure:

"targets": [
    {
      "azureQueries": {
        "scope": [ ],
        "tagSettings": { },
        "locations":  null
      }
    }
  ]

Note that the square brackets around the “targets” object shouldn’t be there, and that they’re missing from the “azureQueries” object, which should be a list.
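One way to sanity-check the shape before sending it is to build the corrected structure as nested hashtables in PowerShell and let ConvertTo-Json place the brackets for you (a quick sketch; the scope value is a placeholder):

# The corrected structure: azureQueries is an array of query objects
$targets = @{
    azureQueries = @(
        @{
            scope       = @("/subscriptions/<subscription id>/resourceGroups/test-rg")
            tagSettings = @{}
            locations   = $null
        }
    )
}

# -Depth ensures the nested objects are fully expanded
$targets | ConvertTo-Json -Depth 10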

Once this was solved, my very basic “Create” script with static JSON seemed to work.

Working Example

Here’s an example of my full PowerShell script, which creates a recurring schedule for Definition Updates, applied against a specific resource group and targeting any VM with the tag ClientID = “Client1”.

# Define my parameters to the Automation Account
$resourceGroupName = "test-rg"
$automationAccountName = "test-automation"
$SubscriptionId = "<subscription_id>"
$ConfigName = "MalwareDefinitions"
 
# Import the function, contained in a file from the same directory
. .\Get-AzureRmCachedAccessToken.ps1
# Use the function to get the Access Token
$BearerToken = ('Bearer {0}' -f (Get-AzureRmCachedAccessToken))
# Add the Access Token into proper format
$RequestHeader = @{
    "Content-Type"  = "application/json";
    "Authorization" = "$BearerToken"
}
 
$Body = @"
{
  "properties": {
    "updateConfiguration": {
      "operatingSystem": "Windows",
      "duration": "PT0H30M",
      "windows": {
        "excludedKbNumbers": [    ],
        "includedUpdateClassifications": "Definition",
        "rebootSetting": "Never"
      },
      "azureVirtualMachines": [
 
      ],
      "targets": 
        {
          "azureQueries": [{
                    "scope": [
                        "/subscriptions/$SubscriptionId/resourceGroups/$resourceGroupName"
                    ],
                    "tagSettings": {
                        "tags": {
                            "ClientID": [
                                "Client1"
                            ]
                        },
                        "filterOperator": "Any"
                    },
                    "locations": null
                }]
        }
    },
    "scheduleInfo": {
      "frequency": "Hour",
      "startTime": "2018-10-31T12:22:57+00:00",
      "timeZone": "America/Los_Angeles",
      "interval": 2,
      "isEnabled": true
    }
  }
}
"@
 
$URI = "https://management.azure.com/subscriptions/$($SubscriptionId)/"`
    + "resourceGroups/$($resourcegroupname)/providers/Microsoft.Automation/"`
    + "automationAccounts/$($automationaccountname)/softwareUpdateConfigurations/$($ConfigName)?api-version=2017-05-15-preview"
 
#This creates the update scheduled deployment
$Response = Invoke-RestMethod -Uri $URI -Method Put -Body $Body -Headers $RequestHeader
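To confirm the deployment was actually created, you can issue the same GET from earlier against the new configuration name and check the provisioning state (re-using the $URI and $RequestHeader variables from above):

# Read the configuration back and check its state
$Check = Invoke-RestMethod -Uri $URI -Method GET -Headers $RequestHeader
# Should return "Succeeded" once creation is complete
$Check.properties.provisioningState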

Azure Function – Resolve DNS

As part of my search to provide outbound Deny on an Azure NSG with whitelisted FQDN entries, I started looking at Azure Functions.

The idea is that I would have an Automation runbook on a schedule, which called my function for a variety of domain names, receiving the resolved IP addresses in return. These would then be compared against outbound NSG rules, and if the resolved IP differs from what is in the NSG, it would update it.

In reality there isn’t much need for this, since you can do the DNS resolution right in the runbook with this:

$currentIpAddress = [system.net.dns]::GetHostByName("$fqdn").AddressList.IPAddressToString
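In a runbook you would likely wrap that in basic error handling, since an unresolvable name throws an exception. A minimal sketch, using GetHostAddresses (which returns every address for the record):

try {
    # Collect every resolved address as a string
    $currentIpAddress = [System.Net.Dns]::GetHostAddresses($fqdn) | ForEach-Object { $_.IPAddressToString }
}
catch {
    Write-Output "Could not resolve $fqdn"
}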

There are other limitations with this idea as well:

  • For a globally-managed DNS behind some type of CDN or round-robin mechanism, it’s possible that IP resolution would continually be different. Take “smtp.office365.com” for example.
  • There isn’t a way to manage wildcard whitelists – “*.windowsupdate.com” isn’t something you can resolve to individual IP addresses.

All that being said, I still used this as a learning opportunity for my first function.

To begin, in the Azure Portal I went to the “App Services” blade, clicked “Add”, and searched for Function.

During creation I accepted most of the defaults, and was left with a v2 Function App and the initial “HttpTriggerCSharp1” function.

I am by no means a programmer, and certainly not familiar with C# from ASP.NET Core, as evidenced by my previous post. With that in mind, here are the contents of the function I ended up with:

#r "Newtonsoft.Json"
 
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using System.Text;
 
public static async Task<string[]> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
 
    string name = req.Query["name"];
 
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;
    List<string> collectedIP = new List<string>();
    IPAddress[] ipaddresses = null;
    if (name != null){
        try
        {
            // Putting this in the Try because if it errored out I wanted to see that
            // as a defined message rather than failure of the function
            ipaddresses = Dns.GetHostAddresses(name);
        }
        catch (Exception)
        {
            log.LogInformation("Did not resolve IP from: " + name);
            collectedIP.Add("Did not resolve");
        }
 
        if (ipaddresses != null)
        {
            // Knowing that multiple IPs could be returned for a record, used a ForEach
            foreach (IPAddress ip in ipaddresses)
            {
                log.LogInformation("Resolved " + name + " to " + ip.ToString());
                // Add the resolved IP to a string list
                collectedIP.Add(ip.ToString());
                log.LogInformation("Added IP to list");
            }
            log.LogInformation("End of If Ipaddresses isn't null");
        }
        log.LogInformation("End of If Name isn't null");
    }
    else
    {
        //return a string
        log.LogInformation("No IP passed In");
        collectedIP.Add("No IP passed in");
    }
    log.LogInformation("Ready to return value");
    // Return the string list as an array to the calling entity
    return collectedIP.ToArray();
}

Now I run this function in Test mode, with a query parameter of “name”.

And I get results both in my log and in the Output.

To bring this into my Automation runbook, I retrieved the function URL; since this is a private function, it includes my key value.

This PowerShell command is then used to invoke the function, with the parameter at the end of the URL:

$IPList = Invoke-WebRequest 'https://functionappname.azurewebsites.net/api/HttpTriggerCSharp1?code=<privatekey>&name=www.microsoft.com'

Due to the limitations I mentioned at the start of this post, I never went far enough in my runbook to connect this $IPList into logic for updating the NSG.
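Had I continued, the first step would have been parsing that response. Invoke-WebRequest returns a response object, so the JSON array coming back from the function still needs converting; a minimal sketch:

# The response body is a JSON array of strings
$ResolvedIPs = $IPList.Content | ConvertFrom-Json
foreach ($ip in $ResolvedIPs) {
    Write-Output "Resolved IP: $ip"
}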

Azure Function learning

I’m playing around with an Azure Function that I’m going to eventually call from an Azure Automation runbook (post to come in the near future).

During this process, I learned some pretty key things as I was testing and going along. I’m a little embarrassed to post them publicly, but eventually a non-coder like myself somewhere is going to be trying the same thing and maybe this will help.

Function erroring out during testing

In my function, I used a ForEach loop to add a string to a List<string> for each instance of a collection. My function would compile just fine, but it would always error out on this one line with a really generic error. I only knew it was erroring out on this line because I placed a log output on the next line, and it would never be reached.

[Error] Executed 'Functions.HttpTriggerCSharp2'

This came down to my poor coding skills: I wasn’t initializing the list properly. I was starting it like this:

List<string> collectedIP = null;

But it really needed to be this to initialize it properly:

List<string> collectedIP = new List<string>();
Without this, adding to the collection isn’t possible. I’m sure that anyone who actually knows C# is shaking their head reading this, but I guess that’s what you get when you learn organically without real training.

Building the query string

I want to pass in a parameter to my function, using the query string. At the same time, I also want to use a function key so that this can’t be run anonymously.

Originally, I was trying this, with the ampersand delimiting the second query parameter (after the function key):

Invoke-WebRequest https://appname.azurewebsites.net/api/functionName?code=functionKey&name=www.microsoft.com

In PowerShell, this returned the error:

The ampersand (&) character is not allowed. The & operator is reserved for future use; wrap an ampersand in double quotation marks ("&") to pass it as part of a string.

So I tried this string:

Invoke-WebRequest 'https://appname.azurewebsites.net/api/functionName?code=functionKey"&"name=www.microsoft.com'

But then I received the error:

Invoke-WebRequest : The remote server returned an error: (401) Unauthorized.

Well, I knew it wasn’t unauthorized, because it ran properly in the browser. The actual fix was stupidly easy: I hadn’t put any quotes around the whole string – either single or double quotes works fine:

Invoke-WebRequest "https://appname.azurewebsites.net/api/functionName?code=functionKey&amp;name=www.microsoft.com"

TLS Support in PowerShell

When I got to actually testing my function, I tried to call it from PowerShell in this format:

Invoke-WebRequest 'https://appname.azurewebsites.net/api/functionName?code=functionKey'

However, upon doing so I received this error:

iwr : The underlying connection was closed: An unexpected error occurred on a send.

A Google search led me to discover that PowerShell will by default attempt to use TLS 1.0 for Invoke-WebRequest, unless you’re using PowerShell Core 6.

My Azure Function uses TLS 1.2 by default and as a minimum. This can be found on the “Platform Settings” page of the Function, under “SSL”.

The solution (at least as a workaround) is to force that session to use TLS 1.2 like this:

[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12
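A small refinement worth noting: rather than overwriting the setting, you can OR the flag in, so that any protocols already enabled for the session stay enabled:

# Adds TLS 1.2 to the enabled protocols instead of replacing them
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor [System.Net.SecurityProtocolType]::Tls12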

Primary source for this information was here.

Azure Multiple NICs or Static IPs through Terraform and DSC

A situation came up where I needed to have two HTTP bindings on port 80 on a web server residing in Azure: one binding each on two different IIS sites within a single VM. The creation of this configuration isn’t as simple as one would initially expect, due to some Azure limitations.

There are two options in order to achieve this:

  • Add a secondary virtual network interface to the VM
  • Add a second static IP configuration on the primary virtual network interface of the VM.

For each of these, I wanted to deploy the necessary configuration through Terraform and Desired State Configuration (DSC).

Second IP Configuration

In this scenario, it’s quite simple to add a second static IP configuration in Terraform:

resource "azurerm_network_interface" "testnic" {
  name                = "testnic"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.2.0.5"
    primary                       = true
  }
  ip_configuration {
    name                          = "ipconfig2"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.2.0.6"
  }
}

However, when using static IP addresses like this, Azure requires you to perform configuration within the VM as well. This is necessary for both the primary and the secondary IP configurations.

Here’s how you can configure it within DSC:

Configuration dsctest 
{ 
    Import-DscResource -ModuleName PSDesiredStateConfiguration 
    Import-DscResource -ModuleName xPSDesiredStateConfiguration 
    Import-DscResource -moduleName NetworkingDSC
 
    Node localhost {
        ## Rename VMNic
        NetAdapterName RenameNetAdapter 
        { 
            NewName = "PrimaryNIC"  
            Status = "Up" 
            InterfaceNumber = 1 
        }
 
        DhcpClient DisabledDhcpClient
        {
            State          = 'Disabled'
            InterfaceAlias = 'PrimaryNIC'
            AddressFamily  = 'IPv4'
            DependsOn = "[NetAdapterName]RenameNetAdapter "
        }
 
        IPAddress NewIPv4Address
        {
            #Multiple IPs can be comma delimited like this
            IPAddress      = '10.2.0.5/24','10.2.0.6/24'
            InterfaceAlias = 'PrimaryNIC'
            AddressFamily  = 'IPV4'
            DependsOn = "[NetAdapterName]RenameNetAdapter "
        }
 
        # Skip as source on secondary IP address, in order to prevent DNS registration of this second IP
        IPAddressOption SetSkipAsSource
        {
            IPAddress    = '10.2.0.6'
            SkipAsSource = $true
            DependsOn = "[IPAddress]NewIPv4Address"
        }
        DefaultGatewayAddress SetDefaultGateway 
        { 
            Address = '10.2.0.1' 
            InterfaceAlias = 'PrimaryNIC' 
            AddressFamily = 'IPv4' 
        }  
 
    }
 
}

Both the DhcpClient (disabling DHCP) and DefaultGatewayAddress resources are required; otherwise you will lose connectivity to your VM.

Once “SkipAsSource” runs, the IP addresses have the proper priority, matching the Azure configuration.
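You can verify the result from inside the VM – SkipAsSource should show True only on the secondary address:

Get-NetIPAddress -InterfaceAlias 'PrimaryNIC' -AddressFamily IPv4 | Select-Object IPAddress, SkipAsSource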

Second NIC

When adding a second NIC in Terraform, you have to add a property (“primary_network_interface_id”) on the VM resource for specifying the primary NIC.

resource "azurerm_network_interface" "testnic1" {
  name                = "testnic1"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.2.0.5"
  }
 
}
resource "azurerm_network_interface" "testnic2" {
  name                = "testnic2"
  location            = "${var.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  ip_configuration {
    name                          = "ipconfig1"
    subnet_id                     = "${azurerm_subnet.test.id}"
    private_ip_address_allocation = "static"
    private_ip_address            = "10.2.0.6"
  }
 
}
 
resource "azurerm_virtual_machine" "test" {
  name                  = "helloworld"
  location              = "${var.location}"
  resource_group_name   = "${azurerm_resource_group.test.name}"
  network_interface_ids = ["${azurerm_network_interface.testnic1.id}","${azurerm_network_interface.testnic2.id}"]
  primary_network_interface_id = "${azurerm_network_interface.testnic1.id}"
  vm_size               = "Standard_A1"
 
  storage_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2016-Datacenter"
    version   = "latest"
  }
 
  storage_os_disk {
    name          = "myosdisk1"
    vhd_uri       = "${azurerm_storage_account.test.primary_blob_endpoint}${azurerm_storage_container.test.name}/myosdisk1.vhd"
    caching       = "ReadWrite"
    create_option = "FromImage"
  }
 
  os_profile {
    computer_name  = "helloworld"
    admin_username = "${var.username}"
    admin_password = "${var.password}"
  }
 
  os_profile_windows_config {
    provision_vm_agent = true
    enable_automatic_upgrades = true
  }
 
}

In this scenario, once again Azure requires additional configuration for this to work. The secondary NIC does not get a default gateway and cannot communicate outside of its subnet.

One way to solve this with DSC is to apply a default route for that interface, with a higher metric than the primary interface. In this case, we don’t need to specify static IP addresses inside the OS through DSC, since there’s only one per virtual NIC.

Configuration dsctest 
{ 
    Import-DscResource -ModuleName PSDesiredStateConfiguration 
    Import-DscResource -ModuleName xPSDesiredStateConfiguration 
    Import-DscResource -moduleName NetworkingDSC
 
    Node localhost {
        ## Rename VMNic(s) 
        NetAdapterName RenameNetAdapter 
        { 
            NewName = "PrimaryNIC" 
            Status = "Up" 
            InterfaceNumber = 1 
            Name = "Ethernet 2"
        }
        NetAdapterName RenameNetAdapter_apps
        { 
            NewName = "SecondaryNIC"  
            Status = "Up" 
            InterfaceNumber = 2
            Name = "Ethernet 3"
        }
 
        Route NetRoute1
        {
            Ensure = 'Present'
            InterfaceAlias = 'SecondaryNIC'
            AddressFamily = 'IPv4'
            DestinationPrefix = '0.0.0.0/0'
            NextHop = '10.2.0.1'
            RouteMetric = 200
        }
 
    }
 
}
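Once this applies, both default routes are visible inside the VM, with the secondary NIC’s route carrying the higher (less preferred) metric:

Get-NetRoute -DestinationPrefix '0.0.0.0/0' | Select-Object InterfaceAlias, NextHop, RouteMetric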

You could also add a Default Gateway resource to the second NIC, although I haven’t specifically tested that.

Generally speaking, I’d rather add a second IP to a single NIC – having two NICs on the same subnet with the same effective default gateway might function, but doesn’t seem to be best practice to me.

DSC IIS bindings and SSL certificates

This was a tricky one that really didn’t leave me with an ideal solution.

In using DSC, I want my compiled node configurations to be generic – “webserver” instead of “webserver01” – so that they can be re-used by VMs sharing the same characteristics, and so that I avoid duplicating information like IP addressing and VM names that has already been specified in the Terraform configuration for deployment.

At the same time, I want to be able to deploy a web server with a site on port 443 and a self-signed certificate which is created by DSC.

Combining these two ideas was not something I found I could accomplish with pre-existing modules.

I first looked at the xWebAdministration DSC module, which contains the xWebsite resource among others. With this, I could use the following syntax:

xWebSite Admin { 
            Name = "Admin"
            PhysicalPath = "E:\inetpub\AdminSite"
            State = "Started"
            ApplicationPool = "Admin"
            Ensure = "Present"
            BindingInfo     = @(
                MSFT_xWebBindingInformation
                {
                    Protocol              = "HTTPS"
                    Port                  = 443
                    CertificateThumbprint = ""
                    CertificateStore      = "MY"
                })
            LogPath = "E:\inetpub\logs\AdminSite"
            DependsOn = "[File]E_AdminSite"
         }

Here, I want to pass in the thumbprint of my previously generated self-signed certificate. However, I don’t know its thumbprint, and it will be unique when I deploy this node configuration between a Web01 and a Web02 VM.

I tried resolving the thumbprint like this: CertificateThumbprint = (Get-ChildItem Cert:\LocalMachine\My | where {$_.Subject -like "*$($Node.ClientCode).com*"}); however, that continued to give me DSC errors about a null reference when it was attempting to apply.

When I came across this StackOverflow question, it was clear why this wasn’t working: since the compilation happens on Azure Automation servers, of course they won’t have a certificate matching my subject name, and thus a thumbprint can’t be generated.

So instead I thought, I can just create a script that applies the certificate after the website has been created. Here is where I ran into additional problems:

  • If I create the xWebsite with HTTPS and 443 with no certificate, it errors
  • If I create the xWebsite with no binding information, it default assigns HTTP with port 80 (conflicting with another website that I have)
  • If I create the xWebsite with HTTP and port 8080 as a placeholder value, now I have IIS listening on ports I don’t actually want open
  • If I create the xWebsite with HTTP and port 8080 and then cleanup that binding afterwards with a Script, on the next run DSC is going to try and re-apply that binding, since I’ve effectively said it is my desired state

Ultimately what I was left with was creating a script that deployed the whole website, and not using xWebsite at all. Like I said, not ideal but it does work to meet my requirements.

Here’s the script that I’ve worked out:

Script WebsiteApps          
        {            
            # Must return a hashtable with at least one key            
            # named 'Result' of type String            
            GetScript = {            
                Return @{            
                    Result = [string]$(Get-ChildItem "Cert:\LocalMachine\My")            
                }            
            }            
 
            # Must return a boolean: $true or $false
            TestScript = {
                Import-Module WebAdministration
                # Grab the IP based on the interface name, which is previously set in DSC
                $ip1 = (Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias $($Using:Node.VLAN)).IPAddress
                # Find out if we've got anything bound on this IP for port 443
                $bindcheckhttps = Get-WebBinding -Name "Apps" -IPAddress $ip1 -Port 443
                $bindcheckwildcard = Get-WebBinding -Name "Apps" | Where-Object { $_.BindingInformation -eq "*:80:" }

                # If site exists
                if (Test-Path "IIS:\Sites\Apps")
                {
                    Write-Verbose "Apps site exists."
                    # If log file setting is correct
                    if ((Get-ItemProperty "IIS:\Sites\Apps" -Name logfile).directory -ieq "E:\inetpub\logs\AppsSite")
                    {
                        Write-Verbose "Log file is set correctly."
                        # If IP bound on port 443
                        if ($bindcheckhttps)
                        {
                            Write-Verbose "443 is bound for Apps."
                            # If SSL certificate bound
                            if (Test-Path "IIS:\SslBindings\$ip1!443")
                            {
                                Write-Verbose "SSL Certificate is bound for Apps"
                                # Wildcard binding check for Apps
                                if (-not ($bindcheckwildcard))
                                {
                                    Write-Verbose "* binding does not exist for Apps."
                                    Return $true
                                }
                                else
                                {
                                    Write-Verbose "* binding exists for Apps."
                                    Return $false
                                }
                            }
                            else
                            {
                                Write-Verbose "SSL Certificate is NOT bound for Apps"
                                Return $false
                            }
                        }
                        else
                        {
                            Write-Verbose "IP not bound on 443 for Apps."
                            Return $false
                        }
                    }
                    else
                    {
                        Write-Verbose "Log file path is not set correctly"
                        Return $false
                    }
                }
                else
                {
                    Write-Verbose "Apps site does not exist"
                    Return $false
                }
            }
 
            # Returns nothing            
            SetScript = {
                $computerName = $Env:Computername
                $domainName = $Env:UserDnsDomain
 
                $ip1 = (Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias $($Using:Node.VLAN)).IPAddress
                $bindcheckhttps = Get-WebBinding -Name "Apps" -IPAddress $ip1 -Port 443
                $bindcheckwildcard = Get-WebBinding -Name "Apps" | Where-Object { $_.BindingInformation -eq "*:80:" }

                # If site doesn't exist, create it
                if (-not (Test-Path "IIS:\Sites\Apps"))
                {
                    Write-Verbose "Creating Apps site"
                    New-Website -Name "Apps" -PhysicalPath E:\inetpub\AppsSite -ApplicationPool "Apps"
                }

                # If port 443 not bound, bind it
                if (-not ($bindcheckhttps))
                {
                    Write-Verbose "Binding port 443"
                    $apps = Get-Item "IIS:\Sites\Apps"
                    New-WebBinding -Name $apps.Name -Protocol "https" -Port 443 -IPAddress $ip1
                }
                if ($bindcheckwildcard)
                {
                    Write-Verbose "Removing wildcard binding for Apps"
                    Get-WebBinding -Name "Apps" | Where-Object { $_.BindingInformation -eq "*:80:" } | Remove-WebBinding
                }

                # If SSL certificate not bound, bind it
                if (-not (Test-Path "IIS:\SslBindings\$ip1!443"))
                {
                    Write-Verbose "Binding SSL certificate"
                    Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -match "CN\=$Computername\.$DomainName" } | Select-Object -First 1 | New-Item "IIS:\SslBindings\$ip1!443"
                }

                # If log file setting isn't correct, fix it
                if (-not ((Get-ItemProperty "IIS:\Sites\Apps" -Name logfile).directory -ieq "E:\inetpub\logs\AppsSite"))
                {
                    Write-Verbose "Setting log file to the proper directory"
                    Set-ItemProperty "IIS:\Sites\Apps" -Name logFile -Value @{directory="E:\inetpub\logs\AppsSite"}
                }
            }
            DependsOn = "[xWebAppPool]Apps","[Script]GenerateSelfSignedCert"
        }