I’ve been trying to work with my ISP (Shaw Business) to sort out why I cannot use two static IPs at the same time.
The technician I spoke to this morning informed me that it’s as simple as modifying a setting on the modem that was originally missed. As this was described as a disruptive event, I decided to wait and call back in the evening. The tech also said he’d put a note on file so that when I call back it is clear what I need done.
But he lied! I called back, there was no note on the file, and the new tech said I needed to call Sales because it’s an account provisioning issue. Of course, by then Sales was closed, and they don’t stay open outside business hours for me.
Tip: if you say you’re going to make a note, please do it!
I should have grabbed the guy’s name and gotten a ticket number, that is my mistake.
### Note: I originally wrote this in 2012, and cannot even remember why I didn’t publish it. Hopefully it is useful as-is ###
I’m on a bit of a monitoring and benchmarking kick lately and have recently gathered some information on the storage in my environment. I wanted to compare something low-end, my legacy storage, and our existing production storage.
I recently read this post by Brent Ozar on monitoring a SAN using CrystalDiskMark, as an easy way to get basic information. As Brent mentions, this isn’t an in-depth benchmarking tool, but it does give some idea of performance. Below are my results from the tests I performed. I couldn’t run CrystalDiskMark directly on my Hyper-V host since it’s Hyper-V Server 2008 R2 (native), so instead I ran it within the VMs themselves.
A few interesting things I noticed here:
Sequential reads on our MD3220i are slower than our legacy MD3000 DAS
But sequential writes are over twice as fast, reaching the throughput limit of gigabit iSCSI
The 14-disk RAID 10 LUN turned out to be faster than the 16-disk LUN on the MD3220i. Although CrystalDiskMark runs 5 passes of each test, perhaps these numbers would average closer together with more passes.
Our 4x1TB RAID 10 on the MD3000 was actually slower than a single laptop hard drive on some tests.
The biggest thing was the limit of iSCSI throughput. I wonder how much higher these values might be if we had Fibre Channel or were direct attached? Overall I’m not too concerned about it, because we don’t need a huge amount of throughput, but rather more IOPS.
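To put some rough numbers behind that trade-off, here’s a quick back-of-the-envelope sketch. The overhead percentage and block sizes are my own illustrative assumptions, not measurements from my environment:

```python
# Rough illustration: a gigabit iSCSI link caps sequential throughput
# long before it caps random IOPS. The 10% protocol overhead figure
# is an assumption for TCP/IP + iSCSI framing, not a measured value.

LINK_GBPS = 1.0           # gigabit Ethernet
PROTOCOL_OVERHEAD = 0.10  # assumed overhead fraction

usable_mb_per_s = LINK_GBPS * 1000 / 8 * (1 - PROTOCOL_OVERHEAD)

# At a 64 KB sequential block size, the wire saturates at roughly:
seq_iops_at_cap = usable_mb_per_s * 1024 / 64

# But a hypothetical 5,000 IOPS of 8 KB random I/O needs only:
random_mb_per_s = 5000 * 8 / 1024

print(f"usable bandwidth ~ {usable_mb_per_s:.0f} MB/s")
print(f"64 KB sequential I/Os at that cap ~ {seq_iops_at_cap:.0f}/s")
print(f"5,000 x 8 KB random IOPS needs only ~ {random_mb_per_s:.0f} MB/s")
```

So even if random IOPS went up dramatically, a gigabit link wouldn’t be the bottleneck — which is why the throughput cap doesn’t worry me much.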
With that in mind, I decided to check out SQLIO to see how things stack up there. Again I used a blog post from Brent Ozar to set this up.
I had a problem with interpreting the results, in that my test laptop was reporting 1150 IOPS, which is ridiculous. Upon further inspection, this was because I was using 2 threads with 8 outstanding I/Os per thread. Reducing this down to just 1 thread with one outstanding I/O gave more believable results.
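In hindsight, the inflated number makes sense if you think in terms of queue depth. A rough sanity check using Little’s Law (IOPS ≈ outstanding I/Os ÷ average latency) — the 14 ms latency below is an assumed figure for a laptop drive, not one of my measurements:

```python
# Back-of-the-envelope check using Little's Law:
#   steady-state IOPS ~ queue depth / average per-I/O latency (seconds)
# The latency figure is an illustrative assumption for a laptop drive.

def expected_iops(threads: int, outstanding_per_thread: int,
                  avg_latency_ms: float) -> float:
    """Steady-state IOPS for a given queue depth and per-I/O latency."""
    queue_depth = threads * outstanding_per_thread
    return queue_depth / (avg_latency_ms / 1000)

# At ~14 ms per random I/O:
print(expected_iops(1, 1, 14.0))  # queue depth 1:  ~71 IOPS
print(expected_iops(2, 8, 14.0))  # queue depth 16: ~1143 IOPS
```

With a queue depth of 16, a perfectly ordinary drive can report four-digit IOPS, which lines up with the suspicious 1150 I was seeing.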
I still didn’t know whether I could trust it though, so I changed my test.bat to start a perfmon log at the same time, monitoring disk writes/sec, disk reads/sec and disk transfers/sec for the drive.
Then I compared the CUMULATIVE figure from SQLIO to the Max disk transfers/sec in perfmon.
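The comparison itself is simple once the counter log is exported to CSV. Here’s a hypothetical sketch of it — the counter path, timestamps, and values below are made up for illustration, and SQLIO’s figure would come from the IOs/sec line in its summary output:

```python
import csv
import io

# Hypothetical example: find the Max of "Disk Transfers/sec" in a
# perfmon counter log exported to CSV, to compare against SQLIO's
# reported IOs/sec. All values here are invented for illustration.

perfmon_csv = io.StringIO(
    '"Time","\\\\HOST\\PhysicalDisk(1 E:)\\Disk Transfers/sec"\n'
    '"10:00:01","142.1"\n'
    '"10:00:02","151.7"\n'
    '"10:00:03","148.9"\n'
)

rows = csv.reader(perfmon_csv)
next(rows)  # skip the header row
max_transfers = max(float(row[1]) for row in rows)

sqlio_iops = 150.0  # assumed figure from SQLIO's summary output
print(f"perfmon max: {max_transfers:.1f}  SQLIO: {sqlio_iops:.1f}")
```

If the two numbers land in the same ballpark, that’s a decent sign the SQLIO result can be trusted.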
My Mindtouch Core wiki VM was originally running on VMware Server a long time ago. I needed to migrate this to Hyper-V so that I could decommission my use of VMware.
I originally wrote this post more than 2 years ago, but am publishing it now in case someone finds it useful.
I’ve been using DPM 2012 R2 for a few months now, having replaced Symantec Backup Exec 2010 due to growing data sizes and increased struggles with tape rotations.
However, I’ve found a number of deficiencies with DPM that make me wish we were able to implement something like Veeam instead.
Here’s a short summary of what I need DPM to do better:
No deduplication support!
Disk-volume-based storage leaves ‘islands of storage’ that are unusable and inefficient
Prevents disks from being shared for other backup purposes such as Hyper-V replication
Lack of long-term disk backups
Our TechNet reading has shown that since DPM uses VSS, it can only take a maximum of 64 snapshots per protected resource. We’re currently unsure whether this applies to VMs as well.
Poor visibility into DPM running operations
No clarity on what the data transfer represents
No information on compression ratios
No transfer speed indicators
No easy way to see status of data across all protected sources
No dashboards or easy summaries.
Many clicks to drill down into each protection group
Poor configurability on logging
Email notifications are either very chatty or non-existent, with little middle ground
No escalation methods or schedules
No automated test restore capabilities or scheduling
Limited Reporting
Only 6 reports out of the box, and you must use SQL Server Reporting Services to build anything new (which I am adept with, but that’s beside the point)
Tape library support seems cumbersome, and compression isn’t working despite being reported as enabled
No built-in VM replication technology for disaster recovery scenarios
Very low community knowledge or support
For example, trying to find information on tape compression is nearly impossible; almost no one online is discussing DPM and how it’s used.
No central console for viewing multiple backup source/destination pairs