Mourning the loss of Dekiwiki

In 2007 I implemented Mindtouch Dekiwiki as an internal website. I was excited by the quality of the product, the extensibility, and most importantly the active community and developer interaction.

As Mindtouch grew, they incorporated more excellent features, and were very open about the direction of the product. The developer site contained a wealth of information about release schedules, change logs and tutorials for the product. I found myself interacting with the community regularly, posting information on how to do certain things, answering questions on the forums and filing bugs. I was encouraged that employees of Mindtouch were directly interacting with the community on a daily basis.

I don’t think the company that produced that product exists any longer. It’s been replaced by a buzzword-spewing, faceless entity that hides real information about its product behind flowery text and email signup forms. I lose a lot of interest when companies make it difficult to gather information about their product, and if you take a look at mindtouch.com you’ll find they’re one of the worst offenders. There is only a line or two on the main page describing what the company is all about, and the product page itself is still undergoing an identity crisis, with names such as Mindtouch TCS, Social Help System and just plain Mindtouch appearing in most of the descriptions.

There is actually very little information about the product: no demo site or feature comparisons, no cost information or even licensing models. The few videos that do exist are hidden behind email sign-ups. As a company marketing its product, if you feel the need to hide what you’ve built behind a sign-up wall, you immediately make everyone who comes across it distrustful. Are they not confident enough to proudly show it off? Are they worried that the licensing model will scare people away?

The developer site is effectively dead, and so is Mindtouch Core, the open source edition of what was once Deki Wiki. Most frustrating of all, Mindtouch Core is nowhere to be found; it’s basically impossible to track down now, which is sad considering that I remember the owners’ and developers’ passionate exclamations a few years ago that the product started as open source, would always remain open source, and that its community would always be vital.

To be clear, I can’t really blame the leaders of Mindtouch for moving in this direction. One look at their customer list and you can see it’s a profitable transition. However, I can’t help but feel disappointed and a little betrayed. Seeing the excitement of the developers and users who are adding to a product is one of the things that makes it so attractive; it tells me that the product is good enough to make people talk about it and invest time into it.

Dekiwiki, I’ll remember you fondly as I go looking for your replacement.


PowerEdge R410 – PERC S300 and Server 2012

I encountered an epic planning fail on my part this morning. During the annual Christmas Maintenance that I perform, I decided to upgrade my Hyper-V cluster to Server 2012, since I have the Datacenter licenses and want to use some of the features.

So I took my time and prepared the upgrade, documenting every step I would take so that things would go smoothly.

When I began the process, I booted into the Server 2012 media, loaded the PERC S300 drivers from Dell (the Server 2008 R2 drivers should work, right?) and got a blue screen.

Some quick digging revealed that the S300 is not supported for Server 2012, and never will be.

I can’t believe that I didn’t think to check my RAID card for compatibility before embarking on this. At this point I don’t want to run my Hyper-V host without RAID (by converting the S300 into a plain SATA controller), so I’ll look at the PERC H200 or H700 through my Dell rep.


Why is it that with all the preparations made, there’s always something forgotten?


Microsoft Virtual Academy (and Cloud Leader contest)

Earlier today I received an email from TechNet Flash with an interesting mention of what it called the “Cloud Leader Contest”.

The premise is to get the word out about Server 2012, System Center 2012, and the private cloud features of those products. The benefit is an early-bird prize of a laptop, tablet and phone (ending November 30th) and a grand prize of $5,000.

If you’re reading this and it’s not November 30th yet, don’t register, because I want a better chance to win! Joking aside, the contest ends in July 2013, so you have lots of time to build up those entries.

(Image from the Microsoft site, not mine; I wish I could look this good standing in my server room.)

After I registered and perused the entry methods, I discovered the second section where you can complete courses on the Microsoft Virtual Academy (MVA) for additional entries.

To be honest, I had never heard of MVA before, and at first glance I assumed it was very sales-oriented with lots of buzzwords. As I dug deeper into one of the courses, along with its PDF manual and whitepapers, I discovered that a wealth of technical information is available on the new features of Server 2012.

I’m sure that I could have discovered this information by browsing through TechNet, but having it all accessible in one place, with progress tracking too, is very attractive.


After completing one course and a couple of the module exams, I was surprised to find that my profile was ranked #209 in my country (Canada), despite having accumulated only 54 points. Perhaps I’m not the only one who hadn’t heard of MVA before.

I can definitely say that as I consider the new MCSA and MCSE certifications for Server 2012, the MVA will be a valuable starting point towards that goal.


Fallibility – physically and at work

I haven’t made a post in a few weeks, which is unfortunate. However, my excuse this time is a good one, as I broke my right clavicle playing Ultimate Frisbee. It was a failed dive into the end zone, and then a trip directly to the hospital for x-rays. Because of that, life at home and at work (once I went back) has been busy, to say the least.

Since I’ve been back, I’ve been busy setting up my new backup storage device, based on the SuperMicro SC847 chassis. My plan was to use 18 x 3.0 TB drives in a RAIDZ2 array provided by FreeNAS. This would then be presented to my Backup Exec 2012 server through iSCSI to be used as a deduplication storage device.
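
For context, here is a quick back-of-the-envelope sketch of the usable capacity of that layout, assuming a single 18-disk RAIDZ2 vdev and ignoring ZFS metadata overhead and the free-space headroom you’d want to leave:

    # Rough usable capacity of an 18-disk RAIDZ2 vdev.
    # Assumptions: one wide vdev, decimal TB as printed on the drive label,
    # no allowance for ZFS metadata, swap or slop space.
    disks = 18
    parity_disks = 2              # RAIDZ2 survives two failed drives per vdev
    drive_tb = 3.0

    raw_tb = disks * drive_tb
    usable_tb = (disks - parity_disks) * drive_tb
    usable_tib = usable_tb * 1e12 / 2**40    # roughly what the OS will report

    print(f"Raw: {raw_tb:.0f} TB, usable: {usable_tb:.0f} TB (~{usable_tib:.1f} TiB)")

So roughly 48 TB of the 54 TB raw ends up usable before ZFS overhead, which is plenty of room for a deduplication store.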

During my testing before the purchase everything worked well and it seemed like a great idea. Even after I received the parts and built the unit, my testing showed that this was a good solution.

However, the second goal for this unit was to place it across the parking lot in a secondary ‘off-site’ building, connected by two 802.11ac wireless devices. Once this move was completed, performance over that wireless link turned out to be no better than my existing 802.11a network over the same stretch, and then a drive in the FreeNAS array failed.

It’s no mystery that I’m not a Unix/FreeBSD guru, and ZFS is completely new to me. Because of that, I had difficulty troubleshooting and performing the proper steps to take the array out of its degraded state (there’s a bit more detail there, as it wasn’t simply a failed drive, but it’s not entirely relevant to this post).
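
For anyone hitting the same wall, the generic ZFS recovery workflow comes down to a couple of zpool commands: check which device is unhealthy, replace it, and wait for the resilver. The sketch below is that generic procedure only, not the exact steps I ended up taking; the pool and device names are placeholders, and on FreeNAS you would normally drive the replacement from the web UI rather than a script.

    import subprocess

    # Placeholder names for illustration only - substitute your own pool/devices.
    POOL = "backup01"
    FAILED_DEV = "gptid/old-disk-guid"
    NEW_DEV = "gptid/new-disk-guid"

    def zpool(*args):
        """Run a zpool command and return its output."""
        return subprocess.run(["zpool", *args], check=True,
                              capture_output=True, text=True).stdout

    # 1. Confirm which pool/device is unhealthy.
    print(zpool("status", "-x"))

    # 2. Swap the failed member for the new disk; ZFS resilvers onto it.
    zpool("replace", POOL, FAILED_DEV, NEW_DEV)

    # 3. Watch the resilver until the pool returns to ONLINE.
    print(zpool("status", POOL))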

At this point I had to take a step back and admit that while FreeNAS and ZFS were a good idea in theory, given my department’s experience and standardization on Windows, implementing them was a mistake. There’s a large amount of risk in having a backup solution that isn’t easy to maintain and fix, especially when the person who set it up is out for a week (with a broken bone or something; one of those rare events that never happen, right?).

I’m not entirely sure what I’m going to replace FreeNAS with; perhaps I’ll use the SAS HBA to build the array with Windows on top, or perhaps Windows Server 2012 storage pools/spaces.

What I’m actually getting at is that it’s OK to be wrong; it’s OK to make a mistake and need to re-design a solution. One person, or even a team of people, won’t design perfect infrastructure every time. The important part is to humbly admit it, and then improve. Make something better, and do a lessons-learned/post-mortem to ensure that the right questions are asked the next time a project comes up on the radar.


MD3220i Disk Group Expansion

Now that I’ve finished migrating data from one disk group to another on my MD3220i, I needed to break down the old disk group (14x300GB 10k SAS) and then use those disks to expand another disk group from the original purchase (16x300GB 10k SAS) giving one big RAID10 array of 30 disks.
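
As a sanity check on what the consolidation buys, the RAID 10 arithmetic is simple: usable space is half the raw capacity, since every disk is mirrored. A quick sketch (decimal GB as printed on the drive label; the array will report slightly less in GiB):

    # Usable space in RAID 10 is half the raw capacity (every disk is mirrored).
    drive_gb = 300

    def raid10_usable_gb(disks):
        return disks * drive_gb / 2

    before = raid10_usable_gb(16)        # original 16-disk group
    after = raid10_usable_gb(16 + 14)    # after absorbing the old 14-disk group

    print(f"Before: {before:.0f} GB usable, after: {after:.0f} GB usable "
          f"(+{after - before:.0f} GB)")

That works out to roughly 2.4 TB usable before and 4.5 TB after.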

I basically went backwards from adding storage, by:

  • Removing the Cluster Shared Volume (within Failover Cluster Manager)
  • Deleting the Virtual Disk as available cluster storage (within Failover Cluster Manager)
  • Removing the host-to-LUN mapping (within the Dell MD Storage Manager)
  • Deleting the Disk Group and Virtual Disk (within the Dell MD Storage Manager)
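
In theory the two storage-side steps could be scripted with SMcli instead of clicked through in MD Storage Manager. The sketch below is only a guess at what that might look like: the array, virtual disk and disk group names are placeholders, and the command syntax is my assumption rather than something I’ve actually run, so verify everything against the MD3220i CLI guide first.

    import subprocess

    ARRAY = "MD3220i-01"        # placeholder: array name as registered with SMcli
    VDISK = "OldClusterVD"      # placeholder: virtual disk being retired
    DISK_GROUP = "2"            # placeholder: disk group number

    # NOTE: the command strings are an assumption on my part - check the Dell
    # MD3220i CLI reference for the exact syntax before running anything.
    commands = [
        f'remove virtualDisk ["{VDISK}"] lunMapping;',   # drop the host-to-LUN mapping
        f'delete virtualDisk ["{VDISK}"];',              # delete the virtual disk
        f'delete diskGroup [{DISK_GROUP}];',             # delete the disk group
    ]

    for cmd in commands:
        subprocess.run(["SMcli", "-n", ARRAY, "-c", cmd], check=True)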

Following those steps, I had 14 disks of available storage to re-allocate.

I then went to the Logical tab, right-clicked on the disk group, and selected “Add Free Capacity”.


This gave me a list of disks to use, but I could only select two; the wizard wouldn’t let me select the full set.

Unfortunately, I chose two and completed the wizard, which went directly into reconfiguring the Virtual Disk. That’s a 14-hour operation, and I really don’t want to repeat it six more times (once for each remaining pair).

I took a look in the CLI guide for the MD3220i, but there doesn’t appear to be an option for adding free capacity to a disk group. At this point I’m stuck waiting for each disk pair to be added.