UltraVNC and DirectAccess

I’ve finally got UltraVNC working properly with a computer connected through DirectAccess. Here’s a very brief description of how it’s working.

First I deploy UltraVNC 1.2.1.2 through a group policy shutdown script.

UltraVNCInstall

This script uninstalls previous versions, installs the latest version, copies in the ultravnc.ini for standard settings and an ACL file for Active Directory group integration, and then overwrites winvnc.exe with the IPv6 version provided by the creator. Finally, it deletes the desktop icons and starts the service.
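
For reference, here’s a rough PowerShell sketch of what a script like this can look like. The share path, installer name, silent-install switches, and service name below are assumptions for illustration, not the production script:

# Assumed deployment share and install path; adjust for your environment.
$share = '\\server\deploy\UltraVNC'
$dest  = 'C:\Program Files\uvnc bvba\UltraVNC'

# Uninstall any previous version (UltraVNC ships an Inno Setup uninstaller)
$uninst = Join-Path $dest 'unins000.exe'
if (Test-Path $uninst) { Start-Process $uninst -ArgumentList '/VERYSILENT' -Wait }

# Silent install of the new version (installer name and switches assumed)
Start-Process "$share\UltraVNC_Setup.exe" -ArgumentList '/VERYSILENT /NORESTART' -Wait

# Standard settings, the AD group ACL, and the IPv6-capable winvnc.exe
Copy-Item "$share\ultravnc.ini" $dest -Force
Copy-Item "$share\acl.txt" $dest -Force
Copy-Item "$share\winvnc.exe" $dest -Force   # IPv6 build overwrites the stock binary

# Remove desktop shortcuts and start the service (service name assumed)
Remove-Item 'C:\Users\Public\Desktop\*UltraVNC*.lnk' -ErrorAction SilentlyContinue
Start-Service -Name 'uvnc_service'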

The important part is that IPv6 executable, because otherwise the traffic will not pass through to the DirectAccess client.

The other key note is that to make a connection, it needs to be a Reverse VNC connection, initiated from the server (user) side.

First, the Technician needs to start a VNCViewer listening session. The easiest way is to create a shortcut with this command target:

"c:\program files\uvnc bvba\UltraVNC\vncviewer.exe" -listen -dsmplugin SecureVNCPlugin64.dsm

This will place an icon in your system tray and prepare your computer to receive the connection request.

We deploy a shortcut to all workstations in the Start Menu that runs this command:

"c:\program files\uvnc bvba\UltraVNC\winvnc.exe" -connect

When clicking the shortcut, the user is presented with a window in which they can enter the Technician’s workstation name to connect.
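
Deploying the shortcut itself can be scripted. A minimal sketch, assuming a ProgramData Start Menu location and a hypothetical shortcut name:

# Hypothetical shortcut name and location; adjust as needed.
$shell = New-Object -ComObject WScript.Shell
$lnk = $shell.CreateShortcut('C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Request Remote Support.lnk')
$lnk.TargetPath = 'C:\Program Files\uvnc bvba\UltraVNC\winvnc.exe'
$lnk.Arguments = '-connect'
$lnk.Save()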

With this method, no authentication is needed because the connection was initiated from the server side.


Azure Automation – Distribution Group

I’ve been wanting to expand my skills and abilities within the Azure platform recently, and this week a great opportunity arose for this very thing.

We provide dynamic distribution groups within Office 365 for staff, based upon AD attributes populated by DirSync. The primary example here is our “SubCompany All Staff” list, which is created with this query filter:

New-DynamicDistributionGroup -alias SubCompanyAllStaff -Name "SubCompany All Staff" -RecipientFilter { ((RecipientType -eq 'UserMailbox') -and (Company -like 'SubCompanyName*')) }

Now we have executive staff who are transitioning from this SubCompany to a parent company, which is causing an issue. Their Company attributes are no longer “SubCompanyName”, but they wish to remain on the distribution list to receive communications. However, IT does not (will not) maintain a manual membership list for this type of group. We could create an “exceptions” group, add these staff in, mail-enable the group and make sure it’s part of the query filter above, but this means every time someone changes company, we have to remember to add them into the exception group.

A better way is to filter on the proxyAddresses attribute; anyone who has an @subcompany.com email address should be part of the distribution group. Those who switch companies keep legacy @subcompany.com addresses as aliases. For example:

Get-Recipient -Filter {EmailAddresses -like "*@subcompany.com*"}

However, while the Get-Recipient command works here, this type of filtering is NOT supported within a dynamic group recipient filter; it returns no results.
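
For illustration, this is the kind of recipient filter that looks right but comes back empty on a dynamic group (names here are placeholders):

# A wildcard EmailAddresses filter like this returns no results when used
# as a dynamic group's recipient filter, even though Get-Recipient accepts it.
New-DynamicDistributionGroup -Alias SubCompanyLegacy -Name "SubCompany Legacy" -RecipientFilter { EmailAddresses -like "*@subcompany.com*" }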

This leads me to the solution: use an Azure Automation runbook to run the Get-Recipient command to populate a static group on a schedule.

The Query

First, I needed a functioning PowerShell query. I initially arrived at this:

# Add users matching criteria to the list.
$users = Get-Recipient -Filter {EmailAddresses -like "*subcompany.com*" -and RecipientTypeDetailsValue -eq 'UserMailbox'}
foreach ($user in $users) { Add-DistributionGroupMember -Identity CompanyTestAll@company.onmicrosoft.com -Member $user.WindowsLiveID }

# Remove those who no longer have an alias.
$users = Get-DistributionGroupMember -Identity "Company TestAll" | Get-Recipient -Filter {EmailAddresses -notlike "*@subcompany.com*"}
foreach ($user in $users) { Remove-DistributionGroupMember -Identity "Company TestAll" -Member $user.WindowsLiveID -Confirm:$false }

While this did perform what I was after, it was messy. I wanted this to run on a schedule, so the first ‘addition’ command would effectively error out on the majority of the group members (they would already be members). When I tried to run the second command as my service account in the runbook, it threw a large number of errors about the users not being found on the domain controller.

After a lot of back-and-forth testing, I had this command that works perfectly:

Update-DistributionGroupMember "Company TestAll" -Members (Get-Recipient -Filter {EmailAddresses -like "*subcompany.com*" -and RecipientTypeDetailsValue -eq 'UserMailbox'} | Select-Object -ExpandProperty Alias) -Confirm:$false


Automation

Having never looked at Azure Automation before, I started reading. This blog post was a helpful start, and I went through quite a bit of the docs.microsoft.com site reading about Runbooks, PowerShell vs. PowerShell Workflow, credential access, and so on.

I created an Automation account that is intended to be specific to Office 365 scripts, along with a corresponding Resource Group. Since this isn’t going to touch any other Azure resources, I did not create a RunAs account with it.

Next I began looking at the best way to connect to Exchange Online. I quickly found this module which looked very promising. However, after importing it as an Asset in the account, the example given of “Set-ModuleCredential” wasn’t found as a valid cmdlet, and I couldn’t see it in the .psm1 file either. Since there were many other examples of connecting to Exchange, I moved on.

The next problem I found was that when I tried the standard remote PSSession to connect, testing the runbook would fail without any meaningful error message; just three attempts and then “suspended”. Based on that behavior, I found this item on the Azure feedback portal, which described exactly what was happening. The method provided by Rune Myrhaug works great:

# Pull credential from Automation assets
$credObject = Get-AutomationPSCredential -Name "svc_auto_azure"

# Connect to Exchange Online
$ExchangeOnlineSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/PowerShell-LiveID -Credential $credObject -Authentication Basic -AllowRedirection
Import-Module (Import-PSSession -Session $ExchangeOnlineSession -AllowClobber -DisableNameChecking) -Global

I created a service account in Active Directory, synchronized it to O365 with DirSync, and then used the Exchange Admin portal to ensure it had the appropriate roles to manage Distribution Groups. I added this account as a credential asset, and created a daily schedule as well.

The final step was to publish the runbook and associate it with my schedule, and now I have an automated quasi-dynamic distribution group!
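
For anyone who prefers to script those final steps rather than click through the portal, here’s a hedged sketch using the Az.Automation module; the account, resource group, runbook, and schedule names below are all placeholders:

# All names below are placeholders; substitute your own.
$params = @{
    AutomationAccountName = 'O365Automation'
    ResourceGroupName     = 'rg-o365-automation'
}

# Daily schedule starting tomorrow
New-AzAutomationSchedule @params -Name 'DailyGroupSync' -StartTime (Get-Date).Date.AddDays(1) -DayInterval 1

# Publish the runbook, then tie it to the schedule
Publish-AzAutomationRunbook @params -Name 'Sync-SubCompanyAllStaff'
Register-AzAutomationScheduledRunbook @params -RunbookName 'Sync-SubCompanyAllStaff' -ScheduleName 'DailyGroupSync'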

I can foresee this potentially being useful for groups that are not built along attribute filters. Perhaps something like a SharePoint list that is populated by administrative staff or department heads, which is then consumed in the runbook and applied to a distribution group.

Sage 300 PayStub Self-Service PDF export

In April we deployed an add-on for our Sage 300 payroll software, which provides a self-service web interface for employees to access their pay stubs.

We recently discovered that when an employee is set to Inactive or terminated within Sage 300, their access to the portal stops working, which is a good thing. However, in our case the employee had simply transferred to the parent company and still required access.

The self-service module is self-contained code which I can’t modify, so instead I sought out a method of extracting the PDF paystub from the database for a particular user.

Most of the information below was derived from this StackOverflow question and answer.

First, configure your SQL server to allow advanced command line commands:

EXEC sp_configure 'show advanced options', 1;  
GO  
RECONFIGURE;  
GO
EXEC sp_configure 'xp_cmdshell',1  
GO  
RECONFIGURE;  
GO

Next, you need to export a format file for the specific table we’re looking for. I used this command from my own client computer which has SQL Management Studio installed:

bcp dbo.PAYROLLCHECK format nul -S servername -T -d COMDat -n -f formatfile.fmt

This produced a file which looked like the following:

12.0
5
1       SQLCHAR             0       12      ""   1     EMPLOYEEID                     SQL_Latin1_General_CP1_CI_AS
2       SQLDATETIME         0       8       ""   2     PERIODENDDATE                  ""
3       SQLINT              0       4       ""   3     ENTRYSEQ                       ""
4       SQLIMAGE            4       0       ""   4     PDF                            ""
5       SQLDECIMAL          1       19      ""   5     CHECK                          ""

I trimmed this file down so that it looked exactly like this:

10.0
1
1       SQLIMAGE            0       0       ""   1     PDF                            ""

Some things to note here:

  1. The first row needs to match the SQL native client version that your SQL server is running; it was 12.0 when I ran it on my Windows 10 machine with SQL Server 2012 Management Studio, but my actual server is running SQL Server 2008 R2. Thus, I changed the digit to 10.0
  2. The column with “PDF” needs to match the column name that contains your file data. Some examples I found on the Internet weren’t clear here

The last step was to build the query to do the export. It is important to note that in this query the BCP command runs from the server, so any local paths where you’re going to store the format file or output PDFs must be accessible on the SQL Server itself.

This query fills a temporary table with all rows matching a specific employee ID. Then the while loop iterates through each of those rows and exports the PDF, naming each file with the employee ID and the Period End date.

I declare the employee ID we’re looking for twice, once in each batch between the ‘GO’ statements, because I’m lazy and didn’t look for a better way.

IF OBJECT_ID('dbo.tmp_for_export', 'U') IS NOT NULL
    DROP TABLE tmp_for_export
GO
--First Declaration of the Employee ID to use
DECLARE @EmployeeID VARCHAR(20) = '1234';
SELECT
    ROW_NUMBER() OVER(ORDER BY [CHECK]) AS rownum
    ,p.PDF
    ,p.employeeid
    ,CONVERT(CHAR(10), [PERIODENDDATE], 126) AS PeriodEnd
INTO tmp_for_export
FROM payrollcheck p
WHERE employeeid = @EmployeeID
GO

DECLARE @EmployeeID VARCHAR(20) = '1234';
DECLARE @cnt INT
DECLARE @i INT
DECLARE @filename VARCHAR(512)
DECLARE @extension VARCHAR(20)
DECLARE @SQL VARCHAR(4000)
SET @cnt = (SELECT COUNT(*) FROM payrollcheck WHERE employeeid = @EmployeeID)
SET @i = 1
SET @extension = '.pdf'

WHILE @i <= @cnt
BEGIN
    SET @filename = (SELECT PeriodEnd FROM tmp_for_export WHERE rownum = @i)
    SET @SQL = 'BCP "SELECT PDF FROM dbo.tmp_for_export WHERE rownum = ' + STR(@i) + '" QUERYOUT C:\Temp\' + @EmployeeID + '-' + @filename + @extension + ' -f C:\Temp\formatfile.fmt -S servername -d COMDAT -T'
    EXEC master.dbo.xp_cmdshell @SQL
    SET @i = @i + 1
END
GO


This resulted in a bunch of PDFs in C:\Temp on my SQL Server, which I could then go and grab, and give to the payroll department.
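
Grabbing them can be a one-liner over the admin share. A sketch with illustrative paths (the export named the files EmployeeID-PeriodEnd.pdf):

# Illustrative paths; assumes admin-share rights on the SQL Server.
New-Item -ItemType Directory -Path 'C:\PayStubs' -Force | Out-Null
Copy-Item '\\servername\C$\Temp\1234-*.pdf' -Destination 'C:\PayStubs'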


Replace Network Printers

I recently went through a Print Server replacement, and wanted to simplify the process for my staff. Our printer connections are quite variable, so we do not push them out through Group Policy but instead let staff connect to printers they have permission for on an as-needed basis.

I wanted a method of re-assigning connections from the old print server to the new one, and I found such a method here:

http://chris-nullpayload.rhcloud.com/2014/06/powershell-login-script-to-update-printer-mappings-and-keep-the-default-printer

Chris (above) in turn had sourced his script from Boe here and added a line for retaining the default printer.

https://learn-powershell.net/2012/11/15/use-powershell-logon-script-to-update-printer-mappings/


However, I ran into a problem with Chris’ script where it would keep reporting an error that the SetDefaultPrinter call couldn’t find a printer with the specified name.

A few quick tests showed that the result produced in the $default variable can’t be referenced by ShareName alone; the full “Name” of the printer must be used to set the default.

Thus, I modified the last line to look like this:

(New-Object -ComObject WScript.Network).SetDefaultPrinter("\\$($newPrintServer)\" + $($default.ShareName))

I see now that the second comment on Chris’ post by Andy produces this same result.
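
For context, here is a minimal sketch of the overall remapping approach, in the spirit of Boe’s and Chris’ scripts: enumerate the user’s connections to the old server via WMI, re-add them against the new server, then restore the default using the full printer Name. The server names and the WMI filter here are assumptions for illustration:

$oldPrintServer = 'oldserver'   # placeholder
$newPrintServer = 'newserver'   # placeholder
$wsnet = New-Object -ComObject WScript.Network

# Remember the current default printer before re-mapping anything
$default = Get-WmiObject -Class Win32_Printer -Filter "Default = TRUE"

# Re-map every connection that points at the old server
Get-WmiObject -Class Win32_Printer -Filter "SystemName = '\\\\$oldPrintServer'" | ForEach-Object {
    $wsnet.AddWindowsPrinterConnection("\\$newPrintServer\$($_.ShareName)")
    $_.Delete()
}

# Restore the default by full Name ("\\server\share"), not ShareName alone
$wsnet.SetDefaultPrinter("\\$newPrintServer\$($default.ShareName)")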

RadGrid loads empty on Page Load

I spent way too much time finding a resolution to this issue, but I finally found an ugly way to do it so hopefully this helps someone else.

I’m doing a simple data load with a RadGrid from a SqlDataSource. However, due to the nature of the page and data, the initial grid is populated with over 1000 records and performance is terrible.

I want to use the Filters within each RadGrid column as my ‘Search Parameters’ rather than building it manually and passing them in to the SELECT statement.

I tried to set the DataSourceID to empty in the NeedDataSource event; however, I ran into a few obscure issues.

Here’s what I ended up with:

Define your RadGrid with the DataSourceID included. In the MasterTableView, ensure you have a “NoMasterRecordsText” value.

<telerik:RadGrid ID="rgd_ComplianceEdit" runat="server" CellSpacing="0"  AllowPaging="False" Height="600px" AllowFilteringByColumn="true" 
        GridLines="None" AutoGenerateColumns="False" ShowFooter="False" OnItemDataBound="rgd_ComplianceEdit_ItemDataBound" OnInsertCommand="rgd_ComplianceEdit_InsertCommand" 
        OnUpdateCommand="rgd_ComplianceEdit_UpdateCommand" OnNeedDataSource="rgd_ComplianceEdit_NeedDataSource"  DataSourceID="sql_compliance"
        AllowAutomaticUpdates="True" AllowAutomaticDeletes="True" Width="100%" AllowAutomaticInserts="True" GroupingSettings-CaseSensitive="false">
        <ClientSettings>
            <Scrolling AllowScroll="true" ScrollHeight="300px" UseStaticHeaders="true" />
            <Selecting AllowRowSelect="False"></Selecting>
        </ClientSettings>
        <MasterTableView DataSourceID="sql_compliance" DataKeyNames="id" ShowHeadersWhenNoRecords="true" NoMasterRecordsText="Enter Search Term(s) for record display."
            CommandItemDisplay="Top" CommandItemSettings-ShowRefreshButton="false" EditMode="InPlace">

Then create an empty SqlDataSource in addition to your real data source:

<asp:SqlDataSource runat="server" ID="sql_empty" ConnectionString="<%$ ConnectionStrings:SQLConnectionString %>"
        SelectCommand="SELECT 1 where 1 = 0"></asp:SqlDataSource>


Now in the Page_Load event, use an if statement to choose which data source to load.

if (!IsPostBack)
{
    rgd_ComplianceEdit.DataSourceID = "sql_empty";
}
else
{
    rgd_ComplianceEdit.DataSourceID = "sql_compliance";
    rgd_ComplianceEdit.Rebind();
}


When I load the page, the RadGrid comes up empty. When I filter on a column, it populates!