I had someone email me about DFSR and file locking, based on some old comments on a Ned Pyle post from the Ask DS blog on TechNet.
I wrote up a detailed response of how I’ve handled this issue in my environment, and decided that response would make a good post to share, so here it is:
So, things crashed in April 2013, I gave up on DFSR, and we intended to use Silver Peak WAN acceleration across our site-to-site VPNs, combined with RDS or Citrix XenApp.
Then my company was acquired by a much bigger engineering firm, and all my plans were interrupted. Since we had already used DFSR to create a single namespace, when we quit DFSR we just kept a 'single source of truth', and we basically left our users to suffer for months because the parent company was going to roll out a global WAN.
XenApp works fantastically for apps like ArcGIS Desktop and GlobalMapper, but for the file-collaboration side I had begun the process of looking at Panzura, which I raved about here. It still looks like the best overall solution for a centralized file server with geographic collaboration on AutoCAD files. But the price tag for two offices was close to $60k, not counting cloud storage costs; I had just purchased a whole bunch of new storage, so I didn't want new Panzura hardware, and they don't support Hyper-V for their virtual instance (which still blows my mind).
I recently ran across an issue that took a while to resolve, but really shouldn’t have.
I have an SqlDataSource for which I'm purposely using an UPDATE statement as the DeleteCommand, like this:
DeleteCommand="UPDATE [dbo].[UDIC_EquipInventory]
    SET [Status] = '6',
        [AssignedTo] = '',
        [RetiredDate] = GetDate(),
        [LastEditDate] = GetDate(),
        [LastOperator] = @LastOperator
    WHERE UnitID = @unitid"
I have both the parameters there included properly like this:
<DeleteParameters>
    <asp:Parameter Name="unitid" DbType="String" />
    <asp:SessionParameter Name="LastOperator" SessionField="SignedInEmployeeId" DbType="String" />
</DeleteParameters>
However, every time I tried to use the DeleteCommand on my Telerik RadGrid tied to this datasource, I received the error "Must declare the scalar variable @LastOperator".
It turns out, according to this MSDN article on Delete Parameters, that a parameter cannot have the same name as a BoundField representing that column.
I simply changed the parameter name in my query and DeleteParameters sections, and now it works!
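For reference, here is a sketch of the working version. The new parameter name `OperatorId` is just an illustration (any name that doesn't match a bound column works):

```
DeleteCommand="UPDATE [dbo].[UDIC_EquipInventory]
    SET [Status] = '6', [AssignedTo] = '', [RetiredDate] = GetDate(),
        [LastEditDate] = GetDate(), [LastOperator] = @OperatorId
    WHERE UnitID = @unitid"

<DeleteParameters>
    <asp:Parameter Name="unitid" DbType="String" />
    <asp:SessionParameter Name="OperatorId" SessionField="SignedInEmployeeId" DbType="String" />
</DeleteParameters>
```

The column is still named LastOperator in the UPDATE; only the parameter and its `<asp:SessionParameter>` entry change.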
I’ve been trying to work with my ISP (Shaw Business) to sort out why I cannot use two static IPs at the same time.
The technician I spoke to this morning informed me that it’s as simple as modifying a setting on the modem that was originally missed. As this was described as a disruptive event, I decided to wait and call back in the evening. The tech also said he’d put a note on file so that when I call back it is clear what I need done.
But he lied! I called back, there was no note on the file, and the new tech said I needed to call Sales because it's an account-provisioning thing. Of course, now Sales is closed, and they won't open outside of business hours for me.
Tip: if you say you’re going to make a note, please do it!
I should have grabbed the guy’s name and gotten a ticket number, that is my mistake.
My organization uses Office 365 for Exchange and Lync service, although Lync has recently been set up as on-premise.
For a while now the low-resolution photo in Lync has been bothering me, so I set out to find a way to use a high-resolution photo instead.
Microsoft allows a 648×648 photo to be stored in an Exchange 2013 mailbox, which is then used for Lync.
To begin, set up your environment to connect to Office 365 with PowerShell:
- Find a Windows Server 2008 R2 host and perform the rest of the steps on it
- I found this wouldn’t work on my Windows 8.1 host, or a Windows 7 host
- Install the Microsoft Online Services Sign-In Assistant
- Install the Azure Active Directory Module
- Create a .ps1 file with the following contents:
$NPO = New-PSSessionOption -ProxyAccessType IEConfig
$cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionURI https://ps.outlook.com/powershell/?proxymethod=rps -Credential $cred -Authentication Basic -AllowRedirection -SessionOption $NPO
Import-PSSession $Session
Import-Module MSOnline
Connect-MsolService -Credential $cred
- From PowerShell, run the .ps1 script you created
- When prompted for credentials, enter your Office 365 administrative credentials
- Now use the following PowerShell commands:
$photo = ([Byte[]] $(Get-Content -Path "C:\Users\jmiles\Desktop\IMG_0067_Lync.jpg" -Encoding Byte -ReadCount 0))
Set-UserPhoto -Identity "Jeff Miles" -PictureData $photo -Confirm:$False
Set-UserPhoto -Identity "Jeff Miles" -Save -Confirm:$False
Replace the photo location and the identity name in the commands above.
That’s it! Now your photo should be nice and clear when in a Lync call.
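If you want to confirm the upload worked before jumping into a call, one option is to pull the stored photo back down and inspect it. This is a sketch that assumes the remote Exchange session from the earlier script is still open; the output path is just an example:

```powershell
# Download the stored photo and write it to disk for inspection
$saved = Get-UserPhoto -Identity "Jeff Miles"
[System.IO.File]::WriteAllBytes("C:\Temp\photo_check.jpg", $saved.PictureData)
```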
### Note: I originally wrote this in 2012, and cannot even remember why I didn’t publish it. Hopefully it is useful as-is ###
I’m on a bit of a monitoring and benchmarking kick lately and have recently gathered some information on my environment regarding storage. I wanted to see a comparison of something low-end, my legacy storage, and our existing production storage.
I recently read this post by Brent Ozar on monitoring a SAN using CrystalDiskMark, as an easy way to get basic information. As Brent mentions, this isn't an in-depth benchmarking tool, but it does give some idea of performance. Below are my results from the tests I performed. I couldn't run CrystalDiskMark directly on my Hyper-V host since it's Hyper-V Server 2008 R2 (native), so instead I ran it within the VMs themselves.
A few interesting things I noticed here:
- Sequential reads on our MD3220i are slower than our legacy MD3000 DAS
- But sequential writes are over twice as fast, reaching the Ethernet limit of gigabit iSCSI
- The 14-disk RAID10 LUN turned out to be faster than the 16-disk LUN on the MD3220i. Although CrystalDiskMark does 5 tests of each set, perhaps these numbers would still average closer together with more tests.
- Our 4x1TB RAID10 on the MD3000 was actually slower than a single laptop hard drive on some tests.
The biggest thing was the limit on iSCSI throughput. I wonder how much higher these values might be if we had Fibre Channel or were direct-attached. Overall I'm not too concerned about it, because we don't need a huge amount of throughput, but rather more IOPS.
With that in mind, I decided to check out SQLIO to see how things stack up there. Again I used a blog post from Brent Ozar to set this up.
I had a problem interpreting the results, in that my test laptop was reporting 1150 IOPS, which is ridiculous. Upon further inspection, this was because I was using 2 threads with 8 outstanding I/Os per thread. Reducing this down to just 1 thread with one outstanding I/O gave better results.
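The inflated number actually makes sense once you account for queue depth. By Little's law, outstanding I/Os = IOPS × average latency, so a laptop drive can "report" high IOPS simply by keeping many requests in flight. A quick back-of-the-envelope check (the 1150 figure is from my test; the implied latency is derived, not measured):

```python
# Little's law sketch: outstanding I/Os = IOPS x average latency.
outstanding = 2 * 8              # 2 threads x 8 outstanding I/Os per thread
reported_iops = 1150             # what SQLIO reported on the laptop
latency_ms = outstanding / reported_iops * 1000
print(f"{latency_ms:.1f} ms")    # ~13.9 ms per I/O - plausible for a single spindle
```

In other words, the drive wasn't magically fast; it was just servicing a deep queue at ordinary laptop-disk latencies, which is why dropping to 1 thread and 1 outstanding I/O gives a more honest picture.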
I still didn’t know whether I could trust it though, so I changed my test.bat to start a perfmon log at the same time, monitoring disk writes/sec, disk reads/sec and disk transfers/sec for the drive.
Then I compared the CUMULATIVE mark from SQLIO to the Max disk transfer/sec in perfmon.
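My test.bat ended up looking roughly like this. It's a sketch: the counter-set name, drive letter, log path, and SQLIO parameters shown here are illustrative, not my exact values:

```bat
:: Create and start a perfmon counter log, run SQLIO, then stop the log
logman create counter DiskPerf -si 1 -o C:\PerfLogs\diskperf ^
    -c "\LogicalDisk(E:)\Disk Reads/sec" "\LogicalDisk(E:)\Disk Writes/sec" "\LogicalDisk(E:)\Disk Transfers/sec"
logman start DiskPerf

:: 1 thread, 1 outstanding I/O, 120 seconds of 8KB random writes, with latency stats
sqlio -kW -t1 -o1 -s120 -frandom -b8 -LS E:\testfile.dat

logman stop DiskPerf
```

Running both at once lets you line up SQLIO's summary against the perfmon counters for the same interval, which is exactly the comparison described above.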