Visio alternative for network documentation

I’ve been putting effort into network documentation at work, with the goal of maintaining accurate maps of our LAN and WAN connectivity.
Microsoft Visio is the first product that comes to mind for this purpose, but it carries a licensing cost. Here are two alternatives I’ve used:


Right now this is my product of choice; it is fully featured with no limitations, comes with a large number of icons and shapes, and can import many more.

Using it is very intuitive and quick to pick up.


This web service produces very nice-looking drawings, with an easy-to-use interface and great organization of drawings. It comes with icons and shapes from some of the big brands like AWS and Cisco.

The only reason I moved on from this is that the free tier only allows a specific number of drawings, each limited to a small number of objects. That limitation proved too restrictive to continue using it.

Account Currently Disabled

I encountered and resolved a very strange issue tonight, regarding a locked out account.

Using Group Policy, I have a bunch of deployment scripts that run at shutdown. For a select group of computers, these scripts didn’t appear to be running.

My first thought was that the computer account was disconnected from the domain, but all indications were that this was not the issue.

I was able to connect to a problem computer with PsExec, using the -s switch because our shutdown scripts run as the SYSTEM account (or NETWORK SERVICE when accessing the file server).
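For reference, the invocation looked roughly like this (PC01 is a placeholder computer name, not the real machine):

```powershell
# -s runs the remote process as the SYSTEM account, the same context
# our shutdown scripts run under. PC01 is a placeholder computer name.
psexec \\PC01 -s cmd.exe
```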

When I ran the command in my script manually, I received this error:

Logon Failure: Account Currently Disabled

This was really strange; the accounts in use were all active, and definitely not locked out.

When I modified the command to not use network resources, it succeeded. I looked into the Security event log, and noticed an event saying “A logon was attempted using explicit credentials.”

This event also referenced an old account name and computer name from a legacy Active Directory domain that we had previously migrated from.

This triggered a realization, and my next place to look was the Credential Manager.

Sure enough, I ran the following command on the remote computer:

cmdkey /list

and saw an entry for my DFS path (Target: …), with credentials referencing the original non-migrated computer name and account, which no longer exists! Somehow this must have stuck around for quite some time.

I ran a delete command for that credential:

cmdkey /

And now it’s working properly! Next step, stick this command into Group Policy so that it fixes any other machines having the problem.
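A rough sketch of what that Group Policy cleanup could look like in PowerShell, assuming the stale entries can all be matched on the legacy name (OLDDOMAIN below is a placeholder, not my actual domain):

```powershell
# Placeholder for the legacy domain/computer name found in the stale entries
$legacyName = "OLDDOMAIN"

# cmdkey /list prints lines like "    Target: Domain:target=server.example.com";
# find any that reference the legacy name and delete that credential.
cmdkey /list |
    Select-String "target=.*$legacyName" |
    ForEach-Object {
        $target = ($_.Line -split "target=")[-1].Trim()
        cmdkey /delete:$target
    }
```

Run as a shutdown script, this would quietly remove the stale entries from Credential Manager on any machine that still has them.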


RadGrid loads empty on Page Load

I spent way too much time finding a resolution to this issue, but I finally found an ugly way to do it, so hopefully this helps someone else.

I’m doing a simple data load into a RadGrid from a SqlDataSource. However, due to the nature of the page and data, the initial grid is populated with over 1,000 records and performance is terrible.

I want to use the filters within each RadGrid column as my ‘search parameters’, rather than building them manually and passing them into the SELECT statement.

I tried to set the DataSourceID to empty in the NeedDataSource event, but ran into a few obscure issues.

Here’s what I ended up with:
Define your RadGrid with the DataSourceID included. In the MasterTableView, ensure you have a “NoMasterRecordsText” value.

<telerik:RadGrid ID="rgd_ComplianceEdit" runat="server" CellSpacing="0"  AllowPaging="False" Height="600px" AllowFilteringByColumn="true" 
        GridLines="None" AutoGenerateColumns="False" ShowFooter="False" OnItemDataBound="rgd_ComplianceEdit_ItemDataBound" OnInsertCommand="rgd_ComplianceEdit_InsertCommand" 
        OnUpdateCommand="rgd_ComplianceEdit_UpdateCommand" OnNeedDataSource="rgd_ComplianceEdit_NeedDataSource"  DataSourceID="sql_compliance"
        AllowAutomaticUpdates="True" AllowAutomaticDeletes="True" Width="100%" AllowAutomaticInserts="True" GroupingSettings-CaseSensitive="false">
            <Scrolling AllowScroll="true" ScrollHeight="300px" UseStaticHeaders="true" />
            <Selecting AllowRowSelect="False"></Selecting>
        <MasterTableView DataSourceID="sql_compliance" DataKeyNames="id" ShowHeadersWhenNoRecords="true" NoMasterRecordsText="Enter Search Term(s) for record display."
            CommandItemDisplay="Top" CommandItemSettings-ShowRefreshButton="false" EditMode="InPlace">

Then create an empty SqlDataSource in addition to your real data source:

<asp:SqlDataSource runat="server" ID="sql_empty" ConnectionString="<%$ ConnectionStrings:SQLConnectionString %>"
        SelectCommand="SELECT 1 where 1 = 0"></asp:SqlDataSource>


Now in the Page_Load event, use an if statement to choose which data source to load.

if (!IsPostBack) { rgd_ComplianceEdit.DataSourceID = "sql_empty"; }
else { rgd_ComplianceEdit.DataSourceID = "sql_compliance"; rgd_ComplianceEdit.Rebind(); }


When I load the page, the RadGrid comes up empty. When I filter on a column, it populates!

Error 5120 – CSV Paused State

Since implementing CommVault Simpana, I have been receiving almost daily warnings of the following error (Event ID 5120) in my System log:

Cluster Shared Volume 'CSV2' ('CSV2') has entered a paused state because of '(c0000435)'. All I/O will temporarily be queued until a path to the volume is reestablished.

Thorough investigation resolved a number of issues with cluster communication, but this error continued to appear.

I was finally able to stop this from happening by unregistering the EqualLogic VSS hardware provider, using this command:

"C:\Program Files\EqualLogic\bin\eqlvss" /unregserver

The strange thing is, I had specifically set CommVault to use the CSV Shadow Copy Provider via the “VSSProviders” setting on my clients. Despite this, there must still have been some VSS ties to the EQL provider.

Commvault and Hyper-V – my experience

While it is quite simple to find many people talking about Backup Exec and Veeam online, it is much harder to find anecdotal experience with Commvault Simpana. Having recently been part of an implementation at my company, I thought I would share my own opinions, particularly as they relate to a Hyper-V environment. These are my personal views and do not represent my employer in any way.


In short, if you are running a Hyper-V environment, I cannot recommend Commvault Simpana in any capacity. You would be much better served investigating Veeam or Altaro instead, especially since those products are dedicated to virtualization and have a track record of excellent word of mouth.

Simpana V10

Initially I deployed Simpana V10 SP12. Having completed the required (and outrageously expensive) Commvault training, I had a good understanding of how the system worked and how to implement it.

Overall, getting started went smoothly; however, it wasn’t very long before I began encountering issues. Here are some of the things I’ve found with V10:

Lack of Change Block Tracking

Somehow during the investigation phase, no one at my company (including myself!) thought to check whether Commvault does Change Block Tracking (CBT) for Hyper-V. It turns out that in V10, it does not. This came as a very large surprise when I went to back up my 6 TB virtual machine and the incremental required ~30 hours to complete.

Following an investigation with Commvault support, it was determined that a CRC process runs over every single bit in the VM to assess whether it has changed. I made certain optimizations to speed this up as much as possible, such as changing my EqualLogic MPIO policy to Round Robin and ensuring the EQL used 4 NICs with no dedicated management NIC.

By working through some of the other issues below, I was able to mostly mitigate the slow CRC process in my environment but it was a major challenge.

Cluster Shared Volume Owner node

Windows Server 2012 did away with the concept of “Redirected Mode” for backups in a Hyper-V environment, but I don’t think Commvault got the message. While my CSVs didn’t go into an actual Redirected Mode, it turns out that only the first node specified within the Commvault Virtual Client for Hyper-V streams the backup data, regardless of which node owns the CSV being backed up.

What this means is that in my 2-node cluster, the CRC read process occurred between my cluster hosts over the single 1 GbE cluster-communication network, rather than directly on the node that owned the CSV. This was a huge bottleneck and absolutely killed performance in the cluster.

The solution was to create a pre-job PowerShell script that moves CSV ownership to one node in the cluster, which is set as the proxy for the Commvault backup. Not ideal, especially since Windows Server automatically re-balances the CSV owner as of Server 2012.
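The script itself amounts to little more than a one-liner with the FailoverClusters module; the CSV and node names below are placeholders for my environment:

```powershell
# Requires the Failover Clustering feature and its PowerShell module
Import-Module FailoverClusters

# Move ownership of the CSV to the node Commvault uses as its proxy.
# "Cluster Disk 1" and "HV-NODE1" are placeholder names.
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "HV-NODE1"
```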

Multiple Subclients

To fully saturate my iSCSI connections, I had to split my large VMs into smaller ones; the recommendation I received from Commvault was to keep VMs under 2 TB. I found that regardless of how many data readers and network streams I configured on the subclient, only a single iSCSI connection was utilized. Once I changed MPIO to Round Robin, all iSCSI connections were used, but still not fully saturated by one subclient.

Now I have 4 subclients, at ~2 TB each, running concurrently. This took some major effort in re-configuring our file server(s), but thankfully we’re using DFS namespaces to obfuscate the actual server names, so it was fairly invisible to our users.

VSS issues

Previously, using Microsoft DPM or Backup Exec, I never experienced VSS issues on the Hyper-V hosts or the guests. With Simpana, out of 7 jobs running nightly, at least one fails with some kind of VSS error. Whether it is “writer is in a transient state” or simply errors taking the snapshot in the first place, it is a regular occurrence. I have mitigated some of these issues by ensuring that all guest drives keep more than 15% free space, including the PageFile.VHDX volumes I’ve created per VM.
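To keep an eye on that 15% threshold, a check along these lines can be run inside the guests (a sketch, assuming Get-Volume is available, i.e. Server 2012 or later):

```powershell
# Flag any fixed volume with less than 15% free space, the threshold
# that seemed to matter for VSS in my environment.
Get-Volume |
    Where-Object { $_.DriveType -eq 'Fixed' -and $_.Size -gt 0 } |
    Where-Object { ($_.SizeRemaining / $_.Size) -lt 0.15 } |
    Select-Object DriveLetter, FileSystemLabel,
        @{ Name = 'PercentFree'; Expression = { [math]::Round(100 * $_.SizeRemaining / $_.Size, 1) } }
```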

Still, for a top-tier product I would not expect as many errors to occur especially when the environment was fully stable prior to Simpana.

Lack of VSS Hardware Support

I would LOVE to use my EqualLogic hardware VSS provider, but it is not supported, and I have found zero indication that progress is being made on supporting additional VSS hardware providers. I actually tried it out and the backup completed successfully, but there were numerous errors on the Hyper-V node, and since it is an unsupported configuration I cannot use it in production.

Simpana V11

Version 11 SP2 has now been released, and there are two crucial improvements it is supposed to provide for Hyper-V:

Change Block Tracking

A third-party file system driver has been implemented for CBT in Hyper-V environments. After the initial implementation (which requires a new full backup), it seemed to work quite effectively; my 4 large VMs, each previously taking 7-9 hours, now required 40-80 minutes for an incremental.

However, on the second weekend something happened: a subclient job crashed, put a CSV into Redirected Mode, and hard-locked a cluster node when I tried to return the CSV to normal. Since then, CBT has been failing on at least 50% of my subclients, even after Commvault support performed a reset on it.

At this point I don’t place much trust in such a new feature.

CSV Owner recognition

V11 was supposed to introduce new algorithms for CSV owner identification, allowing all cluster nodes to act as coordinators for subclient backups. While this mostly works, there are still odd quirks (that I haven’t dug into deeply yet), such as a weekend job last night that again saturated my cluster communication network between nodes and effectively locked up every VM running on the cluster. For now, I think I’m still safer moving the CSV owner before every job.