
Solving the Virtual Desktop Puzzle Part 1

There is no doubt that desktop virtualization can bring greater operational efficiency to many businesses. However, you need to design for more than pure desktop consolidation to get the most from this technology. Three general components make up a typical desktop environment: the Operating System, User Data, and Applications. By separating these components, each one can be managed independently without affecting the other two.

This post will specifically address how technologies within VMware View can be used to better manage the Operating System in your virtual desktop environment.

A virtual desktop infrastructure with VMware View allows you to maintain multiple desktops in a pool as a single, disposable unit. This functionality is enabled by VMware's linked clone technology. Below is a diagram of what a desktop pool may look like without linked clones. Each of the five virtual desktops is a separate 20GB image, so 100GB of disk space must be used to house the desktop images, and the virtual desktops are managed in much the same way as physical desktops.

[Diagram: a desktop pool without linked clones; each desktop is an independent 20GB image]

Linked clones, on the other hand, allow you to manage the virtual desktops in a much more efficient manner. Below is a diagram of how a linked clone desktop pool might look.

 

[Diagram: a linked clone desktop pool; each 1GB linked clone shares a single 20GB master image]

Here, there is a master image (20GB), and the linked clone virtual disks (1GB each) are based on that master image. Not only does this save a significant amount of disk space, it also lets you manage the entire desktop pool as a single entity. For example, when you make a change to the master image (such as patching on Microsoft Patch Tuesday), the linked clones can pick up that change as well. No more managing patches on a per-desktop basis. You can even take a new snapshot of the master image before you patch and point back to the pre-patch image if additional testing needs to be done.
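To put numbers on the disk savings, here is a quick back-of-the-envelope comparison using the sizes from the two diagrams above. The five-desktop pool size is inferred from the 100GB figure; adjust the constants for your own pool:

```python
# Storage math for a five-desktop pool: independent full images
# versus one master image plus per-desktop linked clone delta disks.
FULL_IMAGE_GB = 20    # size of each independent desktop image
CLONE_DELTA_GB = 1    # size of each linked clone virtual disk
DESKTOPS = 5

full_clones = DESKTOPS * FULL_IMAGE_GB                      # 100 GB
linked_clones = FULL_IMAGE_GB + DESKTOPS * CLONE_DELTA_GB   # 25 GB

print(f"Full clones:   {full_clones} GB")
print(f"Linked clones: {linked_clones} GB")
```

The savings grow with pool size, since each additional desktop costs a full image in the first model but only a small delta disk in the second. The Patch Tuesday workflow goes like this: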

  • Keep the original master image snapshot
  • Patch the master image and create a new snapshot based on that patching
  • Point the desktop pool to the new snapshot
  • Have your users log off their desktops and log back in
  • They now have their new patches
  • Spend the rest of Patch Tuesday doing something besides babysitting Microsoft patches
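As a minimal sketch of how the snapshot step might be scripted, here is an example using pyVmomi, VMware's Python SDK for the vSphere API. The vCenter address, credentials, and master VM name are placeholders, and the actual repointing of the pool to the new snapshot is done in View Manager, which this sketch does not cover:

```python
# Take a pre-patch snapshot of the master image so the pool can be
# pointed back to it if the patched image needs more testing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",  # placeholder vCenter address
                  user="administrator",        # placeholder credentials
                  pwd="password",
                  sslContext=context)

def find_vm(content, name):
    """Walk the vCenter inventory for a VM by name."""
    container = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in container.view if vm.name == name), None)
    finally:
        container.Destroy()

master = find_vm(si.content, "Win7-Master")  # hypothetical master image name
if master is None:
    raise SystemExit("master image not found")

# Snapshot before patching; no memory state is needed for a baseline.
master.CreateSnapshot_Task(name="pre-patch",
                           description="Baseline before Patch Tuesday patching",
                           memory=False,
                           quiesce=False)
Disconnect(si)
```

After patching, you would take a second snapshot the same way and point the desktop pool at it; users pick up the new image the next time they log off and back in.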

This is just one example of the flexibility that VMware View can bring to desktop management. To streamline this process, we want the master desktop image to stay as vanilla as possible, which means we need strategies for the applications users need and for their data. Parts 2 and 3 of this series will address those portions of the desktop. Until then, if you have any comments or questions, feel free to post them.

Still Fighting to Cure Cancer

Back on July 1, 2010, TBL Networks turned on a 40-machine Virtual Desktop Infrastructure (VDI) environment on our Cisco Unified Computing System (UCS) dual B-series blade farm and dedicated all of the system's spare computing power to the World Community Grid and their efforts to find a cure for cancer. The World Community Grid's mission is to create the world's largest public computing grid to tackle projects that benefit humanity. Their work has developed the technical infrastructure that serves as the grid's foundation for scientific research, and its success depends on individuals collectively contributing their unused computer time to change the world for the better. When idle, a member computer requests data on a specific project from World Community Grid's server, performs computations on that data, sends the results back to the server, and asks for a new piece of work. Each computation a computer performs provides scientists with critical information that accelerates the pace of research. Based on some personal connections at TBL, we decided to focus our unused capacity primarily on the fight against cancer, but our compute resources are available to all Community Grid projects when the cancer research teams are not sending work to TBL.
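The cycle described above is the classic volunteer-computing loop. The sketch below is purely illustrative, not the actual World Community Grid client; the idle check, work unit shape, and server calls are all stand-ins for the pattern of fetch, compute, and return:

```python
# Illustrative idle-time work cycle: when the machine is idle, request a
# work unit, compute on it, return the result, and ask for more.
import time

def machine_is_idle():
    """Stand-in for a real CPU-load / user-activity check."""
    return True

def fetch_work_unit():
    """Stand-in for requesting a piece of a research project from the grid server."""
    return {"id": 1, "payload": list(range(1000))}

def compute(work_unit):
    """Stand-in for the real scientific computation."""
    return sum(x * x for x in work_unit["payload"])

def submit_result(work_unit, result):
    """Stand-in for returning the computed result to the grid server."""
    print(f"work unit {work_unit['id']} -> {result}")

while True:
    if machine_is_idle():
        unit = fetch_work_unit()
        submit_result(unit, compute(unit))
    time.sleep(60)  # re-check for idle capacity every minute
```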

Shortly after joining the grid computing team, we posted a blog entry on our first two weeks of results at a little over 200 compute days. We estimated then that by the end of 2010, we would have provided a little over 7 years of results. With 10 days left in the calendar year, I am proud to report that TBL has donated 7 years and 201 days of computing power using Cisco UCS, which has allowed us to return 31,882 results to cancer researchers ranking us #1,777 of the over 300,000 organizations participating on the World Community Grid. In the spirit of the season, TBL received a warm “Thank You” from the World Community Grid today for all of our efforts this year, noting that our contribution to advancing research and helping humanity is making a difference every day.