
The iPad is VDI Ready!

This has been a very cool couple of weeks for the VDI landscape with VMware View. The View client for the iPad, first seen in a demo at VMworld US in 2010, is finally here. Now I know what may have taken them so long.

VMware View 4.6 was also released in the past couple of weeks. Version 4.6 adds the ability to use the PCoIP protocol through the VMware View Security Server that sits in your DMZ. This eliminates the need to set up a VPN for an endpoint device to access a desktop pool over PCoIP from outside your firewall.

I can now see why this functionality is absolutely necessary for accessing a View desktop from the iPad. Super-mobile VDI is really cool, but it would have been a drag to access your desktops only over RDP. Also, having to set up a VPN connection from your iPad would go against the ease of use that the iPad offers.

Below is a video demo of the new iPad client. Among the coolest features are the virtual laptop trackpad and the touch gestures built into the client to take advantage of the iPad's capabilities.

http://www.youtube.com/watch?v=ldECHtfDyjs

There are also some use cases from the field. This one is for Children’s Hospital Central California. I think this is a great use of the technology.

http://www.youtube.com/watch?v=aU0nF_FM–s&feature=related

If you have any questions or would like more information, please contact me. Also, if you would like to see the View Client for iPad in person, we can schedule a demo for you with our VMware View Lab running on our Cisco UCS blade infrastructure.

Solving the Virtual Desktop Puzzle Part 3

In this series we’ve already looked at virtual desktop storage efficiency with “linked clones” and at user profile management options. In this post we will discuss another piece of the desktop image that can potentially be offloaded to the network: the applications.

Remember that in a virtual desktop environment one of our goals is to make the “gold” master image as vanilla as possible. We do this by moving the unique components of the desktop off of the image and onto the network. VMware has a way to virtualize your applications so that they can be offloaded to a network share. This means the applications can be streamed to users when they log in to their desktops. The desktop becomes disposable, and users get the appropriate applications when they log in to any virtual desktop. So how can we do this?

We do this with a VMware product called ThinApp, which even comes bundled with the VMware View licensing. ThinApp allows us to package an application as a single executable file. All of the DLLs and bits that the application requires at runtime are packaged in this single executable, so nothing actually gets installed on the desktop in order to run the application. Once the application is packaged, it can run from the desktop hard drive, an external hard drive, a CD, a DVD, or even from the network. Basically, if you have an operating system and a place to store the packaged (ThinApp’ed) application, you can run it.

If you run the packaged application from the network, each user can have the application streamed to their virtual desktop instance when they log in. There is also the added benefit that the packaged applications can live on the appropriate storage tier if we are running a tiered storage solution. So, we’ve taken care of the user profiles and the applications to make the desktop image as vanilla as possible. Our user profiles and our applications can be centrally managed along with our desktops, and we can now treat multiple desktops as a single pooled unit. No more Microsoft Patch Tuesday woes, no more uncontrolled virus or spyware outbreaks, and fewer desk-side visits to users.
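
To make that concrete, here is a minimal sketch of what “running from the network” looks like. The share path and package name are hypothetical; a ThinApp package is just a self-contained executable, so launching it from a share is all it takes.

[powershell]# Minimal sketch - "\\fileserver\thinapps" and "Firefox_3.6.exe" are hypothetical names.
# A ThinApp package is a self-contained executable, so it can be launched straight
# from the network share; nothing is installed on the virtual desktop.
& "\\fileserver\thinapps\Firefox_3.6.exe"

# In a logon script you would more likely register the packages so users get shortcuts
# and file-type associations. ThinApp ships a thinreg.exe utility for this; see the
# ThinApp documentation for its switches.[/powershell]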

First Ask Harley Session Question and Answer Summary

On Thursday, January 13th, I had my first Ask Harley session. This was a session where I answered virtualization and VMware-related questions on Twitter. I received a lot of great questions during this session. Thank you to all who participated. Below are the questions and their answers in case you missed them on Twitter.

Ask Harley: Question 1 – What common issues or mistakes do you see with your customers who have set up VMware infrastructure or are looking to set up VMware?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

What common issues or mistakes do you see with your customers who have set up VMware infrastructure or are looking to set up VMware?

Answer:

Most of the issues in an initial deployment stem from a lack of capacity, application, and infrastructure planning.

Consider the four core resources (CPU, RAM, disk, network) from a capacity standpoint. Consider application requirements (MS Clustering, dongles, vendor support, etc.).

Consider scalability and ease of management from the infrastructure standpoint. Infrastructure item examples: scale up vs. scale out (more hosts = more DRS opportunities; fewer hosts = more risk).

Details. Details. Details. For example: Do I have enough space for VMDK and swap files? Do I have a syslog server for ESXi?

Keep it simple. Avoid Resource Pools, Reservations, and Limits unless they are needed.

Resource pools are NOT for organization. That’s worth repeating. Resource pools are NOT for organization. Folders are.

There is more involved in a virtualization design/deployment than clicking Next.
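
As a quick side note on the syslog item above, pointing your hosts at a central syslog server is a one-liner in PowerCLI. This is just a sketch; the cluster and syslog server names are made up.

[powershell]# Sketch: point every host in a cluster at a central syslog server.
# "ProdCluster" and "syslog.example.com" are hypothetical names.
Get-Cluster "ProdCluster" | Get-VMHost |
    Set-VMHostSysLogServer -SysLogServer "syslog.example.com" -SysLogServerPort 514[/powershell]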

Ask Harley: Question 2 – Why would you use Virtual Port-ID NLB instead of IP-Hash NLB?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

Why would you use Virtual Port-ID NLB instead of IP-Hash NLB?

Answer:

The summary answer would be simplicity. Port-ID is the default load balancing policy and is good in a wide range of use cases.

Port-ID advantage: simple, effective. Port-ID disadvantage: only egress traffic is load balanced, since the choice depends on the source virtual port ID.

IP-Hash has an upstream dependency on 802.3ad static link aggregation; an example is EtherChannel on Cisco switches. Even if the dependency is met, you may not be load balancing as efficiently as you think. You need MANY destinations for IP-Hash to reach maximum effectiveness.

Why? Because the IP-Hash algorithm XORs the least significant byte (LSB) of the source and destination IP addresses, then computes the modulo of that result over the number of physical NICs.

Formula: (“LSB of source IP of VM” xor “LSB of destination IP”) mod “number of physical NICs in the team”. If the result is the same for two VMs, they will choose the same physical NIC for their traffic. If the result is different, they will choose different physical NICs.
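
If it helps to see the arithmetic, here is a rough PowerShell illustration of that selection logic. It is a back-of-the-napkin sketch only (not how ESX computes it internally), and the IP addresses and NIC count are made up.

[powershell]# Rough illustration of the IP-Hash uplink selection described above.
function Get-IpHashUplink {
    param(
        [string]$SourceIP,       # VM IP address
        [string]$DestinationIP,  # destination IP address
        [int]$NicCount           # number of physical NICs in the team
    )
    # Take the least significant byte (last octet) of each address
    $srcLsb = [int]($SourceIP.Split('.')[-1])
    $dstLsb = [int]($DestinationIP.Split('.')[-1])
    # XOR the two bytes, then modulo over the NIC count
    return ($srcLsb -bxor $dstLsb) % $NicCount
}

# Two VMs talking to the same destination through a two-NIC team
Get-IpHashUplink -SourceIP '10.0.0.10' -DestinationIP '10.0.0.50' -NicCount 2   # uplink 0
Get-IpHashUplink -SourceIP '10.0.0.11' -DestinationIP '10.0.0.50' -NicCount 2   # uplink 1[/powershell]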

IP-Hash advantage: ingress and egress load balancing. IP-Hash disadvantage: upstream dependencies, plus more complexity and planning involved.

For further detail beyond Twitter see Ken Cline’s excellent post on Network Load Balancing here -> http://bit.ly/e7eVK0

Ask Harley: Question 3 – What are some of the most difficult parts of the journey to becoming a VCDX?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

What are some of the most difficult parts of the journey to becoming a VCDX?

Answer:

I can only speak from my experience on the VCDX journey.

If you are well prepared and study, the written portion of the tests (and to some extent the lab), while challenging, are nothing compared to the application and defense.

The application and design submission itself requires a significant amount of work.

Whatever you calculate the work effort to be, you may want to double or quadruple it.

I spent about four weeks worth of man-hours on my application and design.

Make sure you meet the application requirements in your design documentation and then go beyond. Leave nothing blank.

Know your design for the defense. This is worth repeating. Know your design cold for the defense. Nothing can prepare you for the defense other than knowing your design and significant field experience.

You don’t know what the panelists will throw at you, so you must have a breadth of knowledge.

By far the most challenging of all may be getting a handle on your nerves during the panel defense.

My detailed experience is here -> http://goo.gl/Y0tPD

There is also a nice roundup of experiences here -> http://bit.ly/eeycor

Ask Harley: Question 4 – Are there significant performance advantages on ESXi using Dell EqualLogic MPIO drivers over the native VMware Round Robin drivers?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

Are there significant performance advantages on ESXi using Dell EqualLogic MPIO drivers over the native VMware Round Robin drivers?

Answer:

I have not tested the performance of the Dell EqualLogic MPIO drivers, and a quick search did not turn up any official benchmarks. In general, a properly implemented and tested third-party MPIO solution like the EqualLogic MEM or PowerPath/VE should be better at making path selections.

The storage vendor should be more familiar with its array than VMware is. I have experience with PowerPath/VE, and the installation is fairly easy. Once it is installed there is very little management beyond updating the software from time to time.

Any other third-party plugin should have a similar ease-of-use and management story. Consult the vendor.

I did find one unofficial performance testing post here -> http://bit.ly/hkbFv8

Ask Harley: Question 5 – What about using multiple vendor MPIO drivers, have you ever experienced issues in a mixed environment?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

What about using multiple vendor MPIO drivers, have you ever experienced issues in a mixed environment?

Answer:

I have not tested using multiple MPIO drivers.

However, I would not recommend that scenario as a host can only have one path selection policy.

If you have multiple hosts using different path selection policies, then performance or availability can be impacted. You should always use the same path selection policy for all of the hosts in a cluster to avoid path confusion.

Consistency is the key in a virtual infrastructure.
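
As a practical aside, one easy way to keep the policy consistent across a cluster is a quick PowerCLI pass. This is a sketch only; the vCenter and cluster names are made up, and Round Robin is used as the example policy (use whatever policy your storage vendor recommends).

[powershell]# Hedged PowerCLI sketch - assumes you are connecting to a hypothetical vCenter.
Connect-VIServer -Server "vcenter.example.com"

# Report the current path selection policy for every disk LUN on every host in the cluster
Get-Cluster "ProdCluster" | Get-VMHost | Get-ScsiLun -LunType disk |
    Select-Object VMHost, CanonicalName, MultipathPolicy

# Set every disk LUN in the cluster to the same policy
Get-Cluster "ProdCluster" | Get-VMHost | Get-ScsiLun -LunType disk |
    Set-ScsiLun -MultipathPolicy RoundRobin[/powershell]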

Ask Harley: Question 6: With Cisco load balancers in place, do I still specify direct URLs to the two security servers for #VMwareView or use the LB URL?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

With Cisco load balancers in place, do I still specify direct URLs to the two security servers for #VMwareView or use the LB URL?

Answer:

With Cisco load balancers in front of the View security servers, you would specify the load balancer URL.

Solving the Virtual Desktop Puzzle Part 2

In part 1 of this series, we explored the possibilities that VMware View’s linked clones technology unlocks. With this technology we can move closer to deploying a single “gold” image and managing only that “gold” image. That is a very powerful prospect. However, if we truly want to get to that state, some other items in the image need to be offloaded. This post will discuss strategies to offload the user data from the virtual desktop images.

First, let’s define what typically can be found as part of the user data.

  • My documents
  • Desktop
  • Application Data
  • Shortcuts
  • Basically any “user” customization data that makes that desktop unique to the user

If the user data is part of the virtual desktop image, then the virtual desktop is not disposable (from the point of view of the user, at least 🙂 ). We need to store the user data somewhere else if we do not want to lose it when the virtual desktop is refreshed, recomposed, or provisioned again. There are several ways to tackle this particular design consideration. Let’s go over a few of them.

First, the built in Windows methods.

Roaming Profiles

Pros:

  • Built in to Windows
  • Well understood
  • Capable of offloading the entire user profile, including files for third party applications (e.g. Favorites for third party browsers like Firefox.)

Cons:

  • Downloads the entire user profile every time a user logs on
  • Large profiles can cause very long logon times for users
  • The virtual disk on the virtual desktop image will grow with the profile data every time a user logs on
  • Cannot really be monitored for consistency or functionality
  • May be problematic when upgrading from an older Operating System (like Windows XP) to a new Operating System (like Windows 7) due to profile incompatibilities.

Even though it is listed first, I would actually recommend roaming profiles as a last resort. Long-time Windows administrators know the frustrations of roaming profiles. Dealing with roaming profile problems may erode the operational efficiencies gained by deploying a virtual desktop environment in the first place.

Folder Redirection

Pros:

  • Built in to Windows
  • Well understood
  • Redirected folders truly reside completely off of the virtual desktop image
  • Logon times are not an issue like they can be with roaming profiles

Cons:

  • Does not take care of the entire user profile. Third party application customizations (like Favorites for third party browsers like Firefox) may or may not be redirected depending on where that data is stored.
  • Cannot really be monitored for consistency or functionality
  • May be problematic when upgrading from an older Operating System (like Windows XP) to a new Operating System (like Windows 7) due to folder differences.

I have used Folder Redirection many times in different environments. When set up properly, it works reasonably well. My wish-list improvement would be the ability to audit when a user does not have their folders redirected, to avoid any user data loss.
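
For what it’s worth, a rough audit like that can be scripted today. The sketch below just checks, in the user’s session, whether the My Documents shell folder actually points at the network; the \\fileserver\redirect share name is hypothetical.

[powershell]# Rough sketch: flag users whose My Documents folder is NOT redirected.
# Run in the user's logon session; "\\fileserver\redirect" is a hypothetical share.
$shellFolders = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"
$documents = (Get-ItemProperty -Path $shellFolders).Personal   # "Personal" = My Documents

if ($documents -like "\\fileserver\redirect\*") {
    Write-Host "My Documents is redirected to $documents"
}
else {
    # Log this somewhere central so an administrator can follow up
    Write-Warning "My Documents is NOT redirected for $env:USERNAME ($documents)"
}[/powershell]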

Outside of the built-in Windows solutions, there are several third-party solutions that are trying to tackle the “user identity” offloading consideration. These solutions vary in functionality, complexity, and price, so I will just list the general pros and cons of this category of software solution.

Third Party Profile Management

Pros:

  • Profile management is what it does. It had better be good at it 🙂
  • May have more robust monitoring of the user data for consistency and functionality
  • May have the ability to seamlessly migrate user data from an older Operating System (like Windows XP) to a newer Operating System (like Windows 7)
  • Can be a more robust profile management solution vs. built in Windows tools
  • Will likely scale more efficiently than built in Windows tools

Cons:

  • May add more complexity
  • Added price
  • Not all profile management is created equal; research must be done to ensure that the solution fits the needs of your environment. (At least with Roaming Profiles and Folder Redirection you know exactly what you are getting and not getting.)

As you can see, we must offload user data if the virtual desktop environment is going to be as efficient as possible. Fortunately, there are many ways to accomplish this goal. Part 3 of this series will go over offloading the applications from the virtual desktop image. Until then, if you have any comments or questions feel free to post them.

Solving the Virtual Desktop Puzzle Part 1

There is no doubt that desktop virtualization can bring greater operational efficiencies to many businesses. However, one needs to design for more than just pure desktop consolidation to gain the most from this technology. There are three general components that make up a typical desktop environment. These are the Operating System, User Data, and Applications. By separating these components, each one can be managed independently without affecting the other two.

This post will specifically address how technologies within VMware View can be used to better manage the Operating System in your virtual desktop environment.

A virtual desktop infrastructure with VMware View allows you to maintain multiple desktops in a pool as a single, disposable unit. This functionality is enabled by the VMware linked clone technology. Below is a diagram of what a desktop pool may look like without linked clones. Each virtual desktop is a separate 20GB image. This means that 100GB of disk space must be used to house the virtual desktop images. Also, the virtual desktops are managed almost the same way that physical desktops are managed.

[Diagram: desktop pool without linked clones]

Linked clones, on the other hand, allow you to manage the virtual desktops in a much more efficient manner. Below is a diagram of how a linked clone desktop pool might look.

[Diagram: linked clone desktop pool]

Here, there is a master image (20GB), and the linked clone virtual disks (1GB each) are based off of that master image. Not only does this save a significant amount of disk space, it also allows you to manage the entire desktop pool as a single entity. For example, when you make a change to the master image (such as patching on Microsoft Patch Tuesday), the linked clones can get that change as well. No more managing patches on a per-desktop basis. You can even take a new snapshot of the master image before you patch and then point back to the pre-patch image if additional testing needs to be done. The workflow goes like this:

  • Keep the original master image snapshot
  • Patch the master image and create a new snapshot based on that patching
  • Point the desktop pool to the new snapshot
  • Have your users log off their desktops and log back in
  • They now have their new patches
  • Spend the rest of Patch Tuesday doing something besides babysitting Microsoft patches
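
As a small illustration of the snapshot step, here is what it might look like in PowerCLI. The vCenter, VM, and snapshot names are made up; the desktop pool is then pointed at the new snapshot from View Manager.

[powershell]# Hedged PowerCLI sketch of the "patch, then snapshot" step.
# "Win7-Gold" is a hypothetical master image VM name.
Connect-VIServer -Server "vcenter.example.com"

# Keep the original snapshot in place; just add a new one after patching
New-Snapshot -VM "Win7-Gold" -Name "Post-PatchTuesday" `
    -Description "Gold image after this month's patches"

# List the snapshots so you can point the desktop pool at the new one in View Manager
Get-Snapshot -VM "Win7-Gold" | Select-Object Name, Created[/powershell]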

This is just one example of the flexibility that VMware View can bring to desktop management. In order to streamline this process as much as possible, we want the master desktop image to be as vanilla as possible. To do that we need strategies to address applications that the users need and user data. Parts 2 and 3 of this series will address those portions of the desktop. Until then, if you have any comments or questions feel free to post them.

Cisco Nexus 1000v and vSphere HA Slot Sizes

When implementing the Cisco Nexus 1000v, High Availability (HA) slot sizes on your vSphere cluster can be affected. HA slot sizes are used to calculate failover capacity when the “Host Failures” HA admission control setting is used. By default, the slot size for CPU is the highest CPU reservation among all VMs in the cluster (or 256 MHz if no per-VM reservations exist). The slot size for memory is the highest memory reservation among all VMs in the cluster (or 0 MB plus memory overhead if no per-VM reservations exist). If you really want to dive further into HA and slot size calculations, I would highly recommend reading Duncan Epping’s HA Deepdive at Yellow-Bricks.com.
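
If you want to see which per-VM reservations are driving your slot sizes, a quick PowerCLI query will list them. This is a sketch; it assumes you are already connected to vCenter, and the cluster name is made up.

[powershell]# Quick sketch: list any per-VM CPU/memory reservations in the cluster,
# since the largest ones define the HA slot size.
Get-Cluster "ProdCluster" | Get-VM | Get-VMResourceConfiguration |
    Where-Object { $_.CpuReservationMhz -gt 0 -or $_.MemReservationMB -gt 0 } |
    Select-Object VM, CpuReservationMhz, MemReservationMB[/powershell]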


Each Cisco Nexus 1000v Virtual Supervisor Module (VSM) has a reservation of 1500 MHz of CPU and 2048 MB of memory so that these resources are guaranteed for the VSMs. So, the slot size will be affected accordingly (1500 MHz for CPU and 2048 MB for memory).


If you do not want your slot size affected, and you do not want to enable the advanced slot size parameters das.slotCpuInMHz or das.slotMemInMB, there is another solution that still lets you guarantee resources for the Nexus 1000v VSMs while maintaining smaller slot sizes.

Create a separate resource pool for each VSM with a CPU reservation of 1500 MHz and Memory reservation of 2048 MB.
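
In PowerCLI, those resource pools might be created something like this. The cluster and pool names are hypothetical; the reservations match the VSM requirements above.

[powershell]# Hedged PowerCLI sketch: one resource pool per VSM with the documented reservations.
$cluster = Get-Cluster "ProdCluster"   # hypothetical cluster name

foreach ($vsm in "VSM-Primary", "VSM-Secondary") {
    New-ResourcePool -Location $cluster -Name "$vsm-RP" `
        -CpuReservationMhz 1500 -MemReservationMB 2048
}

# Then move each VSM into its pool, for example:
# Move-VM -VM "VSM-Primary" -Destination (Get-ResourcePool -Name "VSM-Primary-RP")[/powershell]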


Since slot sizes are only affected by per-VM reservations, the resource pools will give you the desired result without increasing your HA slot sizes. I wouldn’t recommend doing this with too many VMs, as it can turn into a management nightmare very quickly. However, for the two VSMs in a Cisco Nexus 1000v infrastructure it is manageable.

Now your Nexus 1000v VSMs are happy, and you are not wasting resources in your HA cluster by calculating larger slot sizes than are required.

References

http://www.yellow-bricks.com/vmware-high-availability-deepdiv/

http://frankdenneman.nl/2010/02/resource-pools-and-avoiding-ha-slot-sizing/

Backing up the vCenter 4.x AD LDS Instance

vCenter is one of the most important components of your vSphere 4.x virtual infrastructure. Many advanced capabilities of vSphere 4 (vMotion, DRS, etc.) are not available without vCenter. Prior to vSphere 4.x, it was sufficient to back up the vCenter database; vCenter could be restored by building a new vCenter server, restoring the database, and reinstalling vCenter to attach to the restored database.

With the introduction of vSphere 4.x, vCenter 4.x started using Active Directory Application Mode (ADAM) on Windows Server 2003 and Active Directory Lightweight Directory Services (AD LDS) on Windows Server 2008 to accommodate Linked Mode for vCenter. The roles and permissions are stored in the ADAM or AD LDS database. In order to restore the roles and permissions, the ADAM or AD LDS database must be backed up.

VMware KB1023985 tells you that you need to back up the SSL certificates, the vCenter database, and the ADAM/AD LDS database. There are many well-known ways to back up a SQL database. However, backing up an AD LDS instance is a lesser-known procedure. The following PowerShell script will back up the AD LDS VMware instance on Server 2008 and the SSL folder. As always, test it thoroughly before using it.
[powershell]#
# Name: VC_ADAM_SSL_Backup.ps1
# Author: Harley Stagner
# Version: 1.0
# Date: 08/17/2010
# Comment: PowerShell script to backup AD LDS
#          and SSL folder for vCenter
#
# Thanks to Tony Murray for the AD LDS portion of the
# script.
#
#
#########################################################

# Declare variables
$BackupDir = "C:\backup\VMwareBackup"
$SSLDir = $env:AllUsersProfile + "\VMware\VMware VirtualCenter\SSL"
$IFMName = "adamntds.dit"
$cmd = $env:SystemRoot + "\system32\dsdbutil.exe"
$flags = "`"activate instance VMwareVCMSDS`" ifm `"create full $BackupDir`" quit quit"
$date = Get-Date -f "yyyyMMdd"   # capital MM = month; lowercase mm would give minutes
$backupfile = $date + "_adamntds.bak"
$DumpIFM = "{0} {1}" -f $cmd,$flags
$ServiceVCWeb = "vctomcat"
$ServiceVCServer = "vpxd"

# Main
Stop-Service $ServiceVCWeb
Stop-Service $ServiceVCServer -Force
# Create the folder if it doesn't exist
if (Test-Path -Path $BackupDir)
{Write-Host "The folder" $BackupDir "already exists"}
else
{New-Item $BackupDir -Type directory}
# Clear the IFM folder (Dsdbutil needs the folder to be empty before writing to it)
Remove-Item "$BackupDir\*" -Recurse

# Run Dsdbutil.exe to create the IFM dump file
Invoke-Expression $DumpIFM
# Rename the dump file to give the backup a unique name

Rename-Item "$BackupDir\$IFMName" -NewName $backupfile
Copy-Item $SSLDir $BackupDir -Recurse
Start-Service $ServiceVCWeb
Start-Service $ServiceVCServer

# End Main[/powershell]
This script uses the dsdbutil.exe utility on Windows Server 2008 to back up the AD LDS instance and the SSL folder after it stops the vCenter services. By default it backs these items up to "C:\backup\VMwareBackup". Change that to your liking.
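
If you want to run the backup on a schedule, something like the following, run from an elevated PowerShell prompt, would register a nightly task. The script path is hypothetical; adjust it to wherever you keep the script.

[powershell]# Hedged example: register a nightly run of the backup script with Task Scheduler.
# C:\Scripts\VC_ADAM_SSL_Backup.ps1 is a hypothetical path.
schtasks.exe /create /tn "vCenter ADAM-SSL Backup" /sc daily /st 02:00 /ru SYSTEM `
    /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\VC_ADAM_SSL_Backup.ps1"[/powershell]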

Now, to restore the AD LDS instance data, follow the directions on TechNet.

References

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1023985

http://technet.microsoft.com/en-us/library/cc725665%28WS.10%29.aspx

http://www.open-a-socket.com/index.php/category/ad-lds-adam/

Have to Versus Want to

In both professional and personal life, there are things we like to do and there are things we have to do. We have to pay taxes. We have to pay rent for our office space. We have to keep accurate records of our business transactions. We like to engage with our clients on innovative projects. We like the work flexibility of remote access technology. We like going to a college football game on a crisp autumn afternoon. Based on my conversations with clients over the past few years, one of the things that is definitely falling into the “have to do it” rather than the “like to do it” category is running corporate email systems.

Email has evolved into one of the most important, if not the most important, business applications for many organizations. As email has grown from a simple communication tool that was a nice complement to the phone system, its complexity, its impact, its support requirements, and therefore its costs have grown exponentially. The “wow factor” for using email is gone. The “fun factor” of supporting email is definitely gone.

The good news in this development is that there is no reason to deploy, manage, or run email systems in the cumbersome and expensive ways of the past. Email systems can be deployed in one of three ways: 1) in a traditional architecture, with on-premise physical assets dedicated to email and managed by local staff; 2) virtualized, either on site or remotely, and run as files managed on a shared virtual infrastructure; or 3) in the cloud, as a service with one of several providers.

TBL is a big fan of 2 of these 3 options. Frankly, we hate to see any of our clients investing in a traditional email architecture. We believe that the leverage from either a virtualized email infrastructure or email-as-a-service from a provider like Cisco Mail is a far better use of scarce capital and resources. Without going into a long dissertation, virtualized email can either be a snap-on to existing virtual operations or can be an initial virtualization project that can be leveraged across the enterprise for exceptional returns to the business. Outsourcing email to a provider like Cisco Mail can provide users with the Outlook experience users know and love, but exports all of the capital investments and support activity to a “cloud” provider. Remember how much fun it was the last time your Exchange environment crashed? Remember the pleasant conversations with the end user community while your entire team dropped what they were doing and scurried to get email back up and running? In a Cisco Mail scenario you get to export all of the pandemonium of an email crash to your provider. In a virtualized email scenario, your email system is a file, like all of your other virtualized systems, and can be restarted off a boot from SAN to any available server resource. Both of these options sound a lot better to me than an entire I/T staff running around with their hair on fire scrambling to get email back up and running.

Since email is no longer something we like to do, but is unquestionably something we have to do, why not optimize how we deliver email services through the miracles of either virtual infrastructure or cloud services? Then maybe we can enjoy the college football season a little more.

Hype Meets Reality

It is nice to see the transformation of hype into reality. As someone who has spent a lot of years in technology sales and marketing, I know hype sometimes has a negative connotation in our industry. There have been times when hype has gotten ahead of technology’s ability to deliver, and times when hype has led to misplaced expectations for what new technology can in fact deliver. However, technology hype (I think enthusiasm might be a better term) is not a bad thing. It creates excitement for, and focuses attention on, new innovations that have the potential to improve business operations and service delivery. Hype is always going to be ahead of reality; what matters is that, far more often than not, reality rises to meet the expectations of hype (enthusiasm).

The hype (or enthusiasm) for cloud computing does not need any more gas dumped on the raging fire. What cloud computing needs instead is to see the foundations of business benefit beginning to take shape in the form of successful projects and realized business benefits. I am very pleased to report that we are starting to see the reality of cloud computing supporting the sky-high enthusiasm and expectations.

One TBL client in particular has, in essentially 120 days, transformed their entire operations from a quintessential 1990s operation into a fully operational private cloud, and the business benefits are starting to flow. What has changed and improved? Lots. A better question might be what hasn’t. Tactical, transactional costs to “add more of the same” to the prior infrastructure, which were very high, are now gone. Growth by acquisition was stymied by an infrastructure that was inflexible and difficult at best to scale. In fact, growth of any kind was expensive to support and even more expensive, if not impossible, to maintain with any degree of integrity. Now, the private cloud is a facilitator of cost-effective growth and a point of leverage for future acquisitions. Business recovery services that were simply an auditor’s check mark in the past will be fully dynamic and deployable in the coming 60 days. Cost-effective growth is now standard operating procedure. Business-differentiating agility is now baked into the business through the private cloud. This company can now turn on a dime, reacting to new needs in the market and deploying new services in record time.

None of this happened by accident. New leadership and staff have shown remarkable foresight in developing a winning strategy, combined with a superb ability to execute. TBL engineers have helped architect and deliver replicable best practices in deploying a private cloud. And, not to be overlooked, the technology that delivers the private cloud has lived up to every iota of its hype, its potential, and its expectations. Cloud is certainly the biggest hype of the past several years. It is gratifying and exciting to see hype morph into reality for our clients and to see business benefits emerge as promised.

Death knell for Cisco Unity?

With the upcoming release of Unity Connection 8.5 from Cisco, a number of questions are brought to light. A single feature in this release, in my opinion, marks the end of our need for Unity. It is with a heavy heart and many fond memories that I’ll watch her sail into the sunset.

This feature is coined ‘Single Inbox,’ though we all know it as a unified inbox. Prior to release 8.5, the only way to manage voicemails stored on a Connection server was through an IMAP connection (a finicky IMAP connection, I might add, seeing as it wouldn’t work from mobile phones or really any device or application outside of Outlook).

Using WebDAV for Exchange 2003 or EWS for Exchange 2007 and 2010, a user can now see and manage voicemails from their Exchange mailbox exactly as Unity UM provides.

Now, the key here is that Unity only provided this functionality at higher cost and risk. Not only did Unity require a schema update to install, it has been plagued by permissions and cohabitation issues from the beginning. To be clear, none of these issues are product or development problems. Actually, it couldn’t be further from that. Jeff Lindborg and his team have consistently developed some of the best-crafted products and management tools I’ve ever seen.

In fact, nearly all issues are self-inflicted. Think about it: with Unity UM, the Active Directory, Exchange, Windows server, or security teams all have the ability to ‘affect’ Unity by way of its many dependencies.

With its new release, Unity Connection has now caught up with all of Unity’s features and is delivered as a hardened appliance, which removes nearly all of the dependencies mentioned above.

…and the migration ain’t that bad either. Cisco deserves an atta’ boy on this one.