High Resolution Photo in Lync for Office 365

My organization uses Office 365 for Exchange and Lync services, although Lync has recently been set up on-premises.

For a while now, the low-resolution contact photo in Lync has been bothering me, so I set out to find a way to use a high-resolution photo instead.

Microsoft allows a 648×648 photo to be stored in an Exchange 2013 mailbox, which is then used for Lync.

To begin, set up your environment to connect to Office 365 with PowerShell:

# Connect to Exchange Online, using the system's IE proxy settings
$NPO = New-PSSessionOption -ProxyAccessType IEConfig
$cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionURI https://ps.outlook.com/powershell/?proxymethod=rps -Credential $cred -Authentication Basic -AllowRedirection -SessionOption $NPO
Import-PSSession $Session

# Connect to the MSOnline (Azure AD) service with the same credentials
Import-Module MSOnline
Connect-MsolService -Credential $cred
  • Using PowerShell, run the cmdlets above to connect to Exchange Online and the MSOnline service
  • When prompted for credentials, enter your Office 365 administrative credentials
  • Now use the following PowerShell commands to upload the photo:
# Read the photo file as a byte array (up to 648x648 is supported)
$photo = ([Byte[]] $(Get-Content -Path "C:\Users\jmiles\Desktop\IMG_0067_Lync.jpg" -Encoding Byte -ReadCount 0))
# Upload the photo, then commit it to the mailbox
Set-UserPhoto -Identity "Jeff Miles" -PictureData $photo -Confirm:$False
Set-UserPhoto -Identity "Jeff Miles" -Save -Confirm:$False

Replace the photo location and the identity name in the commands above.
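If you want to confirm the photo actually saved, Get-UserPhoto from the same Exchange session can read it back:

# Confirm the photo is stored on the mailbox; the output should show a populated picture
Get-UserPhoto -Identity "Jeff Miles" | Format-List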

 

That’s it! Now your photo should be nice and clear when in a Lync call.

 

Sources:

https://technet.microsoft.com/en-us/library/jj688150.aspx

http://stackoverflow.com/questions/25199254/automated-script-to-change-user-photos-in-microsoft-exchange-2013-powershell/26150403#26150403

https://technet.microsoft.com/en-us/library/jj151815.aspx#bkmk_installmodule

Disk Monitoring with CrystalDiskMark and SQLIO

### Note: I originally wrote this in 2012, and cannot even remember why I didn’t publish it. Hopefully it is useful as-is ###

 

I’m on a bit of a monitoring and benchmarking kick lately and have recently gathered some information on my environment regarding storage. I wanted to see a comparison of something low-end, my legacy storage, and our existing production storage.

I recently read this post by Brent Ozar on monitoring a SAN using CrystalDiskMark, as an easy way to get basic information. As Brent mentions, this isn’t an in-depth benchmarking tool, but it does give some idea of performance. Below are my results from the tests I performed. I couldn’t run CrystalDiskMark directly on my Hyper-V host since it’s Hyper-V Server 2008 R2 (native), so instead I ran it within the VMs themselves.

Details of CrystalDiskMark (screenshot)

A few interesting things I noticed here:

  • Sequential reads on our MD3220i are slower than our legacy MD3000 DAS
    • But sequential writes are over twice as fast, reaching the Ethernet limit of gigabit iSCSI
  • The 14-disk RAID10 LUN turned out to be faster than the 16-disk LUN on the MD3220i. Although CrystalDiskMark does 5 tests of each set, perhaps these numbers would still average closer together with more tests.
  • Our 4x1TB RAID10 on the MD3000 was actually slower than a single laptop hard drive on some tests.

 

The biggest thing was the limit of iSCSI throughput. I wonder how much higher these values might be if we had Fibre Channel or were direct attached. Overall I’m not too concerned about it, because we don’t need a huge amount of throughput, but rather more IOPs.
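For rough context (my own back-of-the-envelope math, not a measured figure): gigabit Ethernet tops out at 1 Gbps ÷ 8 ≈ 125 MB/s of raw bandwidth, and after TCP/IP and iSCSI overhead a single link usually delivers somewhere in the 100-115 MB/s range, which lines up with the sequential write ceiling mentioned above.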

With that in mind, I decided to check out SQLIO to see how things stack up there. Again I used a blog post from Brent Ozar to set this up.

I had a problem interpreting the results, in that my test laptop was reporting 1150 IOPs, which is ridiculously high for a single laptop drive. Upon further inspection, this was because I was using 2 threads with 8 outstanding I/Os per thread. Reducing this to just 1 thread with a single outstanding I/O gave much more realistic results.

I still didn’t know whether I could trust it though, so I changed my test.bat to start a perfmon log at the same time, monitoring Disk Writes/sec, Disk Reads/sec and Disk Transfers/sec for the drive.
Then I compared the cumulative IOPs figure from SQLIO’s output to the maximum Disk Transfers/sec in perfmon.
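For reference, this is roughly what the combined test looked like; the sqlio.exe path, test file, and 60-second duration are placeholders, and the SQLIO switches (1 thread, 1 outstanding I/O, random 8 KB writes) should be double-checked against Brent's article:

# Start sampling Disk Transfers/sec in the background while SQLIO runs (counter path is an example)
$perf = Start-Job { Get-Counter '\PhysicalDisk(_Total)\Disk Transfers/sec' -SampleInterval 1 -MaxSamples 60 }

# Run SQLIO: 1 thread, 1 outstanding I/O, random 8 KB writes for 60 seconds
& 'C:\SQLIO\sqlio.exe' -kW -t1 -o1 -s60 -b8 -frandom -LS C:\testfile.dat

# Compare SQLIO's reported IOPs to the maximum Disk Transfers/sec perfmon saw
$samples = (Receive-Job $perf -Wait).CounterSamples
($samples | Measure-Object -Property CookedValue -Maximum).Maximum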


Migrate Mindtouch to Hyper-V

My MindTouch Core wiki VM was originally running on VMware Server a long time ago. I needed to migrate it to Hyper-V so that I could decommission my use of VMware.

I originally wrote this post more than 2 years ago, but am publishing it now in case someone finds it useful.

 

Used vmdk2vhd to convert the disk to a VHD file.

After transferring the VHD and booting the VM on Hyper-V, it failed to start.

Used these instructions to assist in fixing: http://itproctology.blogspot.ca/2009/04/migrating-debian-from-vmware-esx-to.html

# Mount the converted disk's root partition
mount -t ext3 /dev/hda1 /root

# Edit fstab and change sda1 to hda1
vi /root/etc/fstab

# Edit the GRUB menu and change sda1 to hda1
vi /root/boot/grub/menu.lst

Then I added a Legacy Network Adapter to the VM, since the emulated adapter works without the Hyper-V integration drivers.
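On a newer host with the Hyper-V PowerShell module (Server 2012 or later; my 2008 R2 host predates it), that step could be scripted along these lines; the VM and switch names are placeholders:

# Add a legacy (emulated) network adapter to a Generation 1 VM; names are examples
Add-VMNetworkAdapter -VMName "mindtouch" -SwitchName "External" -IsLegacy $true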

Then followed these instructions to install Hyper-V integration services

http://www.r2x2.com/install-hyper-v-integration-services-on-debian-5-x/

DPM 2012 R2 and the downsides

I’ve been using DPM 2012 R2 for a few months now, having replaced Symantec Backup Exec 2010 due to growing data sizes and increased struggles with tape rotations.

However I’ve found a number of deficiencies with DPM that make me wish we were able to implement something like Veeam instead.

Here’s a short summary of what I need DPM to do better:

  • No deduplication support!
  • The disk-volume-based storage system leaves ‘islands of storage’ that are unusable and inefficient
    • Prevents disk from being shared for other backup purposes such as Hyper-V replication
  • Lack of long-term disk backups
    • Our TechNet reading has shown that since DPM uses VSS it can only take a maximum of 64 snapshots per protected resource. We’re currently unsure whether this limit applies to VMs as a data source
  • Poor visibility into DPM running operations
    • No clarity on what the data transfer represents
    • No information on compression ratios
    • No transfer speed indicators
  • No easy way to see status of data across all protected sources (a rough PowerShell workaround is sketched after this list)
    • No dashboards or easy summaries.
    • Many clicks to drill down into each protection group
  • Poor configurability on logging
    • Email notifications are either very chatty or non-existent, with little middle ground.
    • No escalation methods or schedules
  • No automated test restore capabilities or scheduling
  • Limited Reporting
    • Only 6 reports out of the box, and must use SQL Reporting Services to build anything new (which I am adept with, but that’s beside the point)
  • Tape library support seems cumbersome, and compression isn’t working despite reporting that it’s running
  • No built in VM replication technology for Disaster Recovery scenarios
  • Very low community knowledge or support
    • For example, trying to find information on tape compression is impossible; no one online is talking about DPM and how it’s used.
  • No central console for viewing multiple backup source/destination pairs
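To illustrate the dashboard gap mentioned above, here's the kind of summary I'd want on one screen; a rough sketch using the DPM Management Shell, with the server name as a placeholder. I'm writing the cmdlet and property names (Get-DPMProtectionGroup, Get-DPMDatasource, Get-DPMRecoveryPoint, FriendlyName) from memory, so verify them with Get-Command on your DPM server:

# Rough per-datasource summary: protection group, datasource, and recovery point count
# Cmdlet and property names are from memory - verify them on your own DPM server
$groups = Get-DPMProtectionGroup -DPMServerName "DPMSERVER"   # placeholder server name
foreach ($pg in $groups) {
    foreach ($ds in (Get-DPMDatasource -ProtectionGroup $pg)) {
        $points = @(Get-DPMRecoveryPoint -Datasource $ds)
        "{0} / {1}: {2} recovery points" -f $pg.FriendlyName, $ds.Name, $points.Count
    }
}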

Ajaxplorer

### Note: I originally wrote this in 2012, but never published it. I haven’t looked at Ajaxplorer since then for many unrelated reasons, but thought I would post it as-is in case it helps someone ###

 

I recently decided to test Ajaxplorer as an FTP replacement. Right now FTP is used for simple file transfers and data uploads, but it’s not secure, and it’s not the most user-friendly way of transferring files.

Ajaxplorer is attractive because it’s a PHP front end for file transfers with a Windows Explorer-like interface. It can be secured with SSL and offers many other nice features.

I wanted to set up an instance of Ajaxplorer on a spare VM which didn’t have very much storage space. With the help of a forum topic on the Ajaxplorer forums, I managed to set this up with the actual data repositories on a different UNC path.

 

Copy Ajaxplorer to C:\ on the web server
Create a symlink to the storage server with mklink /d
Copy the data folder to the storage server instead of the web server
Add the URL Rewrite module to IIS
Add the HTTPS redirect info to web.config (sketched below)
Add the certificate to IIS
Modify php.ini with the file size and mail SMTP options
Modify the bootstrap plugin file for LDAP authentication plus the serial
Add a repository for crew FTP
Email sharing: as the master user QCdata, click on an individual folder, share it out with the named user, choose read and write, and set the email address.
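For the HTTPS redirect step above, this is roughly the web.config rule I mean; it assumes the IIS URL Rewrite module is installed, and it's a generic pattern rather than anything Ajaxplorer-specific:

<!-- web.config fragment: send any plain HTTP request to HTTPS (requires the URL Rewrite module) -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>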
