File Server and remote office collaboration


I had someone email me about DFSR and file locking, based on some old comments on a Ned Pyle post from the Ask DS blog on TechNet.
I wrote up a detailed response describing how I've handled this issue in my environment, and decided it would make a good post to share, so here it is:

 

It’s a long story actually. Not sure if you read my post from April 2013, but I effectively gave up on DFSR due to the instability we were experiencing. Still not sure if it was underlying storage causing it, or just the scale at which we were operating. PeerSync couldn’t keep up with the rate of change on our file servers when we piloted it, and neither could PeerLock when we tried to integrate it with DFSR.

So stuff crashed in April 2013, I gave up on DFSR, and we intended to use Silver Peak WAN acceleration across our site-to-site VPNs, combined with RDS or Citrix XenApp.

Then my company was acquired by a much bigger engineering firm, and all my plans were interrupted. We basically left our users to suffer for months (since we had already used DFSR to create a single namespace, when we quit DFSR we just kept a single 'source of truth') because the parent company was going to roll out a global WAN.

That did eventually happen in late 2014, so at that point we had Riverbed Steelheads providing 80% data reduction across the WAN and a 7ms link between our biggest branch and head office (where the most pain was for ACAD production).

We occasionally heard rumbling from staff that it wasn't good enough, so we pressed onward with the purchase of a 30-seat Citrix XenApp environment with a dedicated NVIDIA GRID K2 card for GPU acceleration. However, we really screwed up the user communication and training, and failed to get adequate buy-in from CAD management to enforce its use.

XenApp works fantastically for apps like ArcGIS Desktop and GlobalMapper, but we really struggled with mouse lag in all AutoCAD streams (except Revit, which we don't use). We went as far as bringing in a specialized consultant out of the UK, and they couldn't figure it out either.

I had begun the process of looking at Panzura, which I raved about here. It still looks like the best overall solution for a centralized file server with geographic collaboration on AutoCAD files. But the price tag for two offices was close to $60k, not counting cloud storage costs; I had just purchased a whole bunch of new storage, so I didn't want new Panzura hardware, and they don't support Hyper-V for their virtual instance (which still blows my mind).

Ultimately, a couple of months ago we had a big meeting with all our CAD managers, and the consensus was that:
– The majority of work done in branch offices performs acceptably, thanks to the Riverbed Steelheads and the low latency provided by the global WAN
– The work that doesn't perform well due to huge file sizes will be done in XenApp, and the mouse lag will just be accepted. But I don't think the CAD managers are actively encouraging this.

And that's where we're at right now: not really solving the issue, but trying to find improvements as best we can.

SqlDataSource Delete Parameters and BoundFields


I recently ran across an issue that took a while to resolve, but really shouldn’t have.

I have a SqlDataSource in which I'm purposely using an UPDATE statement as the DeleteCommand, like this:

DeleteCommand="UPDATE [dbo].[UDIC_EquipInventory]
               SET [Status] = '6'
                  ,[AssignedTo] = ''
                  ,[RetiredDate] = GetDate()
                  ,[LastEditDate] = GetDate()
                  ,[LastOperator] = @LastOperator
               WHERE UnitID = @unitid"

Both parameters are properly declared, like this:

<DeleteParameters>
    <asp:Parameter Name="unitid" DbType="String" />
    <asp:SessionParameter Name="LastOperator" SessionField="SignedInEmployeeId" DbType="String" />
</DeleteParameters>

 

However, every time I tried to use the DeleteCommand on the Telerik RadGrid tied to this data source, I received the error “Must declare the scalar variable @LastOperator”.

 

It turns out that, according to this MSDN article on delete parameters, a parameter cannot have the same name as a BoundField representing that column.

I simply changed the parameter name in my query and in the DeleteParameters section, and now it works! A sketch of the fix follows.
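For illustration, here is roughly what the corrected markup looks like. The replacement name EditOperator is just an example; any name that doesn't collide with a BoundField should work:

DeleteCommand="UPDATE [dbo].[UDIC_EquipInventory]
               SET [Status] = '6'
                  ,[AssignedTo] = ''
                  ,[RetiredDate] = GetDate()
                  ,[LastEditDate] = GetDate()
                  ,[LastOperator] = @EditOperator
               WHERE UnitID = @unitid"

<DeleteParameters>
    <asp:Parameter Name="unitid" DbType="String" />
    <asp:SessionParameter Name="EditOperator" SessionField="SignedInEmployeeId" DbType="String" />
</DeleteParameters>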

Customer Service Tip


I’ve been trying to work with my ISP (Shaw Business) to sort out why I cannot use two static IPs at the same time.
The technician I spoke to this morning informed me that it’s as simple as modifying a setting on the modem that was originally missed. As this was described as a disruptive event, I decided to wait and call back in the evening. The tech also said he’d put a note on file so that when I call back it is clear what I need done.

But he lied! I called back, there was no note on the file, and the new tech said I needed to call Sales because it's an account provisioning thing. Of course, Sales is now closed, and they won't open outside business hours for me.

Tip: if you say you’re going to make a note, please do it!

I should have grabbed the guy's name and gotten a ticket number; that is my mistake.

High Resolution Photo in Lync for Office 365


My organization uses Office 365 for Exchange and Lync service, although Lync has recently been set up on-premises.

For a while now the low-resolution photo of Lync has been bothering me, so I set out trying to find a way to use a high resolution photo instead.

Microsoft allows a 648×648 photo to be stored in an Exchange 2013 mailbox, which is then used for Lync.

To begin, set up your environment to connect to Office 365 with PowerShell:

# Use Internet Explorer proxy settings for the remote session
$NPO = New-PSSessionOption -ProxyAccessType IEConfig
# Prompt for Office 365 administrative credentials
$cred = Get-Credential
# Connect to the Exchange Online remote PowerShell endpoint
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionURI https://ps.outlook.com/powershell/?proxymethod=rps -Credential $cred -Authentication Basic -AllowRedirection -SessionOption $NPO
# Import the Exchange cmdlets into the local session
Import-PSSession $Session
# Load the Azure AD (MSOnline) module and connect with the same credentials
Import-Module MSOnline
Connect-MsolService -Credential $cred
  • Run the commands above in PowerShell
  • When prompted for credentials, enter your Office 365 administrative credentials
  • Then use the following PowerShell commands:
# Read the photo file into a byte array
$photo = ([Byte[]] $(Get-Content -Path "C:\Users\jmiles\Desktop\IMG_0067_Lync.jpg" -Encoding Byte -ReadCount 0))
# Upload the photo to the mailbox as a preview
Set-UserPhoto -Identity "Jeff Miles" -PictureData $photo -Confirm:$False
# Commit the preview photo
Set-UserPhoto -Identity "Jeff Miles" -Save -Confirm:$False

Replace the photo location and the identity name in the commands above.
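If you want to verify the upload before jumping into Lync, something like the following should work from the same session (Get-UserPhoto is part of the imported Exchange cmdlets; the output path is just an example):

# Retrieve the stored photo and write it to disk for inspection
$saved = Get-UserPhoto -Identity "Jeff Miles"
[System.IO.File]::WriteAllBytes("C:\Temp\photo-check.jpg", $saved.PictureData)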

 

That’s it! Now your photo should be nice and clear when in a Lync call.

 

Sources:

https://technet.microsoft.com/en-us/library/jj688150.aspx

http://stackoverflow.com/questions/25199254/automated-script-to-change-user-photos-in-microsoft-exchange-2013-powershell/26150403#26150403

https://technet.microsoft.com/en-us/library/jj151815.aspx#bkmk_installmodule

Disk Monitoring with CrystalDiskMark and SQLIO


### Note: I originally wrote this in 2012, and cannot even remember why I didn’t publish it. Hopefully it is useful as-is ###

 

I'm on a bit of a monitoring and benchmarking kick lately, and have gathered some information about the storage in my environment. I wanted to compare something low-end, our legacy storage, and our existing production storage.

I recently read this post by Brent Ozar on monitoring a SAN using CrystalDiskMark as an easy way to get basic information. As Brent mentions, this isn't an in-depth benchmarking tool, but it does give some idea of performance. Below are my results from the tests I performed. I couldn't run CrystalDiskMark directly on my Hyper-V host, since it's Hyper-V Server 2008 R2 (native), so instead I ran it within the VMs themselves.

[Image: Details of CrystalDiskMark]

A few interesting things I noticed here:

  • Sequential reads on our MD3220i are slower than on our legacy MD3000 DAS
    • But sequential writes are over twice as fast, reaching the Ethernet limit of gigabit iSCSI
  • The 14-disk RAID 10 LUN turned out to be faster than the 16-disk LUN on the MD3220i. Although CrystalDiskMark runs 5 passes of each test, perhaps these numbers would average closer together with more passes.
  • Our 4x1TB RAID 10 on the MD3000 was actually slower than a single laptop hard drive on some tests.

 

The biggest thing was the limit on iSCSI throughput. I wonder how much higher these values might be if we had Fibre Channel or were direct-attached. Overall I'm not too concerned about it, because we don't need a huge amount of throughput; what we need is more IOPS.

With that in mind, I decided to check out SQLIO to see how things stack up there. Again I used a blog post from Brent Ozar to set this up.

I had a problem interpreting the results, in that my test laptop was reporting 1,150 IOPS, which is ridiculous. Upon further inspection, this was because I was using 2 threads with 8 outstanding I/Os per thread. Reducing this to just 1 thread with one outstanding I/O gave more realistic results; a sketch of the change is below.
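For reference, a minimal sketch of that change on the SQLIO command line (the file path, duration, and block size are just examples; -t is threads and -o is outstanding I/Os per thread):

:: Original run: 2 threads with 8 outstanding I/Os each inflates apparent IOPS
sqlio -kR -s120 -frandom -b8 -t2 -o8 -LS C:\SQLIO\testfile.dat
:: Adjusted run: 1 thread with 1 outstanding I/O is closer to a single-user workload
sqlio -kR -s120 -frandom -b8 -t1 -o1 -LS C:\SQLIO\testfile.dat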

I still didn't know whether I could trust it, so I changed my test.bat to start a perfmon log at the same time, monitoring disk writes/sec, disk reads/sec, and disk transfers/sec for the drive.
Then I compared the CUMULATIVE figure from SQLIO to the max disk transfers/sec in perfmon; a sketch of that batch file is below.
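For illustration, a minimal sketch of what that test.bat might look like, assuming logman drives the perfmon counter log (the log name, counter set, and paths are examples):

:: Create and start a counter log sampling the disk counters once per second
logman create counter SqlioDisk -c "\PhysicalDisk(_Total)\Disk Reads/sec" "\PhysicalDisk(_Total)\Disk Writes/sec" "\PhysicalDisk(_Total)\Disk Transfers/sec" -si 1 -o C:\PerfLogs\SqlioDisk
logman start SqlioDisk

:: Run the SQLIO test while the counter log records
sqlio -kR -s120 -frandom -b8 -t1 -o1 -LS C:\SQLIO\testfile.dat

:: Stop the log, then compare SQLIO's CUMULATIVE numbers to the perfmon maximums
logman stop SqlioDisk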

 

 

 
