
Solving the Virtual Desktop Puzzle Part 1

There is no doubt that desktop virtualization can bring greater operational efficiencies to many businesses. However, to gain the most from this technology, you need to design for more than pure desktop consolidation. Three general components make up a typical desktop environment: the Operating System, User Data, and Applications. By separating these components, each one can be managed distinctly without affecting the other two.

This post will specifically address how technologies within VMware View can be used to better manage the Operating System in your virtual desktop environment.

A virtual desktop infrastructure with VMware View allows you to maintain multiple desktops in a pool as a single, disposable unit. This functionality is enabled by VMware linked clone technology. Below is a diagram of what a desktop pool may look like without linked clones. Each virtual desktop is a separate 20GB image, so a pool of five such desktops consumes 100GB of disk space to house the virtual desktop images. The virtual desktops are also managed in much the same way as physical desktops.

[Diagram: desktop pool without linked clones]

Linked clones, on the other hand, allow you to manage the virtual desktops in a much more efficient manner. Below is a diagram of how a linked clone desktop pool might look.

 

[Diagram: linked clone desktop pool]

Here, there is a master image (20GB), and the linked clone virtual disks (1GB each) are based on that master image. Not only does this save a significant amount of disk space, but it also allows you to manage the entire desktop pool as a single entity. For example, when you make a change (such as patching on Microsoft Patch Tuesday) to the master image, the linked clones can receive that change as well. No more managing patches on a per-desktop basis. You can even take a new snapshot of the master image before you patch and then point back to the pre-patched image if additional testing needs to be done. The workflow goes like this:

  • Keep the original master image snapshot
  • Patch the master image and create a new snapshot based on that patching
  • Point the desktop pool to the new snapshot
  • Have your users log off their desktops and log back in
  • They now have their new patches
  • Spend the rest of Patch Tuesday doing something besides babysitting Microsoft patches
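To put numbers to the disk-space savings described above, here is the arithmetic from the two diagrams, assuming the five-desktop pool that the 100GB figure implies (pool size is inferred, and linked-clone delta disks grow over time, so treat this as a point-in-time sketch):

```python
# Disk-space arithmetic for the two pool designs above.
# The five-desktop pool size is inferred from the 100GB figure, and the
# 1GB per-clone delta is a point-in-time size; deltas grow as clones change.
desktops = 5

full_clone_gb = desktops * 20          # five independent 20GB images
linked_clone_gb = 20 + desktops * 1    # one 20GB master + a 1GB delta per clone

print(full_clone_gb, linked_clone_gb)  # prints: 100 25
```

The same master image serves every clone, which is also why a single patched snapshot can refresh the whole pool.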

This is just one example of the flexibility that VMware View can bring to desktop management. To streamline this process as much as possible, we want the master desktop image to be as vanilla as possible. To do that, we need strategies for delivering the applications users need and for managing user data. Parts 2 and 3 of this series will address those portions of the desktop. Until then, if you have any comments or questions, feel free to post them.

Still Fighting to Cure Cancer

Back on July 1, 2010, TBL Networks turned on a 40-machine Virtual Desktop Infrastructure (VDI) environment on our Cisco Unified Computing System (UCS) dual B-series blade farm and dedicated all the spare computing power of the system to the World Community Grid and their efforts to find a cure for cancer. The World Community Grid’s mission is to create the world’s largest public computing grid to tackle projects that benefit humanity. Their work has developed the technical infrastructure that serves as the grid’s foundation for scientific research, and its success depends on individuals collectively contributing their unused computer time to change the world for the better. When idle, a member computer requests data on a specific project from World Community Grid’s server. It then performs computations on this data, sends the results back to the server, and asks the server for a new piece of work. Each computation that a computer performs provides scientists with critical information that accelerates the pace of research. Based on some personal connections at TBL, we decided to focus our unused capacity primarily on the fight against cancer, but our compute resources are available to all World Community Grid projects when the cancer research teams are not sending work to TBL.

Shortly after joining the grid computing team, we posted a blog entry on our first two weeks of results, at a little over 200 compute days. We estimated then that by the end of 2010 we would have provided a little over 7 years of results. With 10 days left in the calendar year, I am proud to report that TBL has donated 7 years and 201 days of computing power using Cisco UCS, which has allowed us to return 31,882 results to cancer researchers, ranking us #1,777 of the over 300,000 organizations participating in the World Community Grid. In the spirit of the season, TBL received a warm “Thank You” from the World Community Grid today for all of our efforts this year, noting that our contribution to advancing research and helping humanity is making a difference every day.

Unified Computing ATP

Cisco Recognizes TBL Networks Data Center Unified Computing Qualifications

Congratulations to TBL Networks, Inc. for meeting all ATP program requirements and criteria necessary to earn the designation of Cisco ATP – Data Center Unified Computing Partner in the USA.

TBL Networks, Inc. has met the rigorous Cisco certified personnel levels required for an ATP – Data Center Unified Computing Partner. This helps ensure that TBL Networks, Inc. sales and support organizations are better prepared to properly sell, design, install, and support the ATP program-specific technology and products.

This is an outstanding accomplishment for TBL Networks, Inc. and demonstrates their desire to develop expertise in this market. TBL Networks, Inc. and the Cisco account management team will continue working together to develop and enhance their mutual capabilities to support TBL Networks, Inc. and its customers.

TBL Networks, Inc. will be recognized for this specialization in the Cisco Partner Locator, located at: http://tools.cisco.com/WWChannels/LOCATR/jsp/partner_locator.jsp

Cisco values the commitment and expertise that TBL Networks, Inc. has demonstrated and looks forward to working together.


TBL Networks Runs with the Mechanicsville Miler

If you live in Richmond, chances are you have heard of the Ukrop’s Monument Avenue 10K or the SunTrust Richmond Marathon. While it might not be as famous, you can add the Mechanicsville Miler to the list of great Richmond road races.

On October 30, over 200 participants, including students, teachers, parents, and friends, ran the three-mile race at Mechanicsville Elementary School. Students also had the chance to compete in a one-mile fun run and a 100-yard dash.

Mechanicsville Miler TBL Networks II

In addition to the competition, students learned about ways to exercise and eat properly at the Health and Fitness fair. Finally, a silent auction was conducted to benefit special projects at the school.

Mechanicsville Miler TBL Networks I

Mechanicsville Elementary assistant principal Alicia Todd said the event “showcases our fantastic school, family and community support.”

TBL Networks was honored to be a Gold Level sponsor of the 2010 Mechanicsville Miler.
To see results and photos, go to the official Mechanicsville Miler page.

TBL Steps Up for Down Syndrome

TBL Networks was recently given the honor of working with the Down Syndrome Association of Greater Richmond (DSAGR) at the First Annual “Step UP for DOWN Syndrome Night” on Friday, September 17.   The event is one of several sponsored by DSAGR to help raise awareness for persons with Down Syndrome in the Greater Richmond area.

 

Over 120 guests enjoyed hors d’oeuvres, cocktails, dancing and a Silent Auction inside the beautiful ballroom at the Westin Hotel in Richmond, Virginia.  The event was a great success as $15,500 was raised for DSAGR. The proceeds will allow DSAGR to fund various programs and services, including education and support for new expecting parents, scholarships for Camp Easter Seals and monthly social opportunities for teens with Down Syndrome.

 

The Down Syndrome Association of Greater Richmond (DSAGR) is a non-profit 501(c)(3) organization that includes parents, family members, friends and professionals working together to improve the quality of life of persons with Down syndrome.

To learn more about DSAGR or to make a tax deductible donation, please visit their official website.

Adv. Routing & Switching

Cisco Recognizes TBL Networks Advanced Routing & Switching Qualifications

Congratulations to TBL Networks, Inc. for meeting all criteria to achieve the Advanced Routing & Switching Specialization. TBL Networks, Inc. has met the resource requirements for the Advanced Routing & Switching Specialization and demonstrated that they are qualified to support customers with their Routing & Switching needs in the USA.

TBL Networks, Inc. will be recognized for this specialization in the Cisco Partner Locator, located at: http://tools.cisco.com/WWChannels/LOCATR/jsp/partner_locator.jsp

Cisco values the commitment and expertise that TBL Networks, Inc. has demonstrated and looks forward to working together.


Harley Stagner Achieves VCDX

VMware Recognizes Harley Stagner as a Virtualization Expert with VCDX #46

TBL’s own Harley Stagner has become the first VMware Certified Design Expert (VCDX) in Virginia and only the 46th worldwide to achieve this elite virtualization certification. The VCDX is the highest level of VMware certification. After qualifying through a pre-screening process open to eligible VMware Certified Professionals (VCPs), Harley had to pass two written examinations to reach the final stage of the process. He traveled to VMware’s headquarters in Palo Alto, CA, on August 26th to defend his virtualization design and implementation plan before a panel of VMware experts. Today, Harley was notified of his successful defense and awarded VCDX #46. All of us here at TBL are very proud of Harley’s accomplishment. His new designation is a testament to the talent that Harley lends to the TBL team.

Congratulations to Harley Stagner, VCDX #46.

Cisco Nexus 1000v and vSphere HA Slot Sizes

When you implement the Cisco Nexus 1000v, High Availability (HA) slot sizes on your vSphere cluster can be affected. HA slot sizes are used to calculate failover capacity when the “Host Failures” HA admission control setting is used. By default, the CPU slot size is the highest CPU reservation among all VMs in the cluster (or 256 MHz if no per-VM reservations exist). The memory slot size is the highest memory reservation among all VMs in the cluster (or 0 MB plus memory overhead if no per-VM reservations exist). If you want to dive further into HA and slot-size calculations, I highly recommend reading Duncan Epping’s HA Deepdive at Yellow-Bricks.com.

[Screenshot: default HA slot sizes]

Each Cisco Nexus 1000v Virtual Supervisor Module (VSM) carries a reservation of 1500 MHz of CPU and 2048 MB of memory so that these resources are guaranteed for the VSMs. The slot sizes are affected accordingly (1500 MHz for CPU and 2048 MB for memory).

[Screenshot: HA slot sizes with VSM reservations]
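To make the slot-size effect concrete, here is a small sketch of the math. The host capacities and the per-VM memory overhead below are hypothetical, and real vSphere 4.x admission control accounts for actual overhead and reserved capacity, so treat this as an approximation:

```python
# Sketch of HA slot-size math. Host capacities are hypothetical, and the
# 100 MB default memory slot stands in for "0 MB + memory overhead".

def slot_size(per_vm_reservations, default):
    """Slot size = the largest per-VM reservation, or a default when none exist."""
    return max(per_vm_reservations, default=default)

def slots(host_capacity, slot):
    """How many slots fit on one host."""
    return host_capacity // slot

host_mhz, host_mb = 4800, 16384   # hypothetical host: 4.8 GHz CPU, 16 GB RAM

# No per-VM reservations: 256 MHz CPU default, assumed ~100 MB memory overhead.
cpu_before = slots(host_mhz, slot_size([], 256))   # 18 CPU slots
mem_before = slots(host_mb, slot_size([], 100))    # 163 memory slots

# With the two VSM reservations (1500 MHz / 2048 MB each):
cpu_after = slots(host_mhz, slot_size([1500, 1500], 256))   # 3 CPU slots
mem_after = slots(host_mb, slot_size([2048, 2048], 100))    # 8 memory slots

# HA uses the more restrictive of the two values per host.
print(min(cpu_before, mem_before), "->", min(cpu_after, mem_after))  # prints: 18 -> 3
```

Dropping from 18 slots to 3 per host is the problem the resource-pool approach below avoids, since resource pool reservations do not enter the slot-size calculation.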

If you do not want your slot sizes affected, and you do not want to enable the advanced slot-size parameters das.slotCpuInMHz or das.slotMemInMB, there is another solution that still guarantees resources for the Nexus 1000v VSMs while maintaining smaller slot sizes.

Create a separate resource pool for each VSM with a CPU reservation of 1500 MHz and Memory reservation of 2048 MB.

[Screenshots: resource pool reservations for each VSM]

Since slot sizes are only affected by per-VM reservations, the resource pools give you the desired result without increasing your HA slot sizes. I wouldn’t recommend doing this with too many VMs, as it turns into a management nightmare very quickly. However, for the two VSMs in a Cisco Nexus 1000v infrastructure, it is manageable.

Now your Nexus 1000v VSMs are happy, and you are not wasting resources in your HA cluster by calculating larger slot sizes than are required.

 

References

http://www.yellow-bricks.com/vmware-high-availability-deepdiv/

http://frankdenneman.nl/2010/02/resource-pools-and-avoiding-ha-slot-sizing/

Backing up the vCenter 4.x AD LDS Instance

vCenter is one of the most important components of your vSphere 4.x virtual infrastructure. Many advanced capabilities of vSphere 4 (vMotion, DRS, etc.) are not available without it. Prior to vSphere 4.x, it was sufficient to back up the vCenter database; vCenter could be restored by building a new vCenter server, restoring the database, and reinstalling vCenter to attach to the restored database.

With the introduction of vSphere 4.x, vCenter 4.x started using Active Directory Application Mode (ADAM) on Windows Server 2003 and Active Directory Lightweight Directory Services (AD LDS) on Windows Server 2008 to accommodate Linked Mode for vCenter. The roles and permissions are stored in the ADAM or AD LDS database. In order to restore the roles and permissions, the ADAM or AD LDS database must be backed up.

VMware KB1023985 tells you that you need to back up the SSL certificates, the vCenter database, and the ADAM/AD LDS database. There are many well-known ways to back up a SQL database, but backing up an AD LDS instance is a lesser-known procedure. The following PowerShell script backs up the VMware AD LDS instance on Server 2008 along with the SSL folder. As always, test it thoroughly before using it.
[powershell]#
# Name: VC_ADAM_SSL_Backup.ps1
# Author: Harley Stagner
# Version: 1.0
# Date: 08/17/2010
# Comment: PowerShell script to back up the AD LDS
#          instance and SSL folder for vCenter
#
# Thanks to Tony Murray for the AD LDS portion of the
# script.
#
#########################################################

# Declare variables
$BackupDir = "C:\backup\VMwareBackup"
$SSLDir = $env:AllUsersProfile + "\VMware\VMware VirtualCenter\SSL"
$IFMName = "adamntds.dit"
$cmd = $env:SystemRoot + "\system32\dsdbutil.exe"
$flags = "`"activate instance VMwareVCMSDS`" ifm `"create full C:\backup\VMwareBackup`" quit quit"
# Note: "MM" is months; lowercase "mm" would give minutes
$date = Get-Date -f "yyyyMMdd"
$backupfile = $date + "_adamntds.bak"
$DumpIFM = "{0} {1}" -f $cmd,$flags
$ServiceVCWeb = "vctomcat"
$ServiceVCServer = "vpxd"

# Main
# Stop the vCenter services so the AD LDS instance is not in use
Stop-Service $ServiceVCWeb
Stop-Service $ServiceVCServer -Force
# Create the backup folder if it doesn't exist
if (Test-Path -Path $BackupDir)
{Write-Host "The folder" $BackupDir "already exists"}
else
{New-Item $BackupDir -Type Directory}
# Clear the IFM folder (Dsdbutil needs the folder to be empty before writing to it)
Remove-Item $BackupDir\* -Recurse

# Run Dsdbutil.exe to create the IFM dump file
Invoke-Expression $DumpIFM
# Rename the dump file to give the backup a unique name
Rename-Item $BackupDir\$IFMName -NewName $backupfile
# Copy the vCenter SSL folder alongside the AD LDS backup
Copy-Item $SSLDir $BackupDir -Recurse
# Restart the vCenter services
Start-Service $ServiceVCWeb
Start-Service $ServiceVCServer

# End Main[/powershell]
This script uses the dsdbutil.exe utility on Windows Server 2008 to back up the AD LDS instance and the SSL folder after stopping the vCenter services. By default, it backs these items up to "C:\backup\VMwareBackup". Change that to your liking.

Now to restore the AD LDS instance data, follow the directions at Technet.

References

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1023985

http://technet.microsoft.com/en-us/library/cc725665%28WS.10%29.aspx

http://www.open-a-socket.com/index.php/category/ad-lds-adam/

Have to Versus Want to

In both professional and personal life, there are things we like to do and things we have to do. We have to pay taxes. We have to pay rent for our office space. We have to keep accurate records of our business transactions. We like to engage with our clients on innovative projects. We like the work flexibility of remote access technology. We like going to a college football game on a crisp autumn afternoon. Based on my conversations with clients over the past few years, one thing that is definitely falling into the “have to do it” rather than the “like to do it” category is running corporate email systems.

Email has evolved into one of the most important, if not the most important, business applications for many organizations. As email has grown from a simple communication tool that nicely complemented the phone system, its complexity, its impact, its support requirements, and therefore its costs have grown exponentially. The “wow factor” of using email is gone. The “fun factor” of supporting email is definitely gone.

The good news in this development is that there is no longer any reason to deploy, manage, or run email systems in the cumbersome and expensive ways of the past. Email systems can be deployed in one of three ways: (1) in a traditional architecture, with on-premise physical assets dedicated to email and managed by local staff; (2) virtualized, either on site or remotely, with the email system running as a set of files on a shared virtual infrastructure; or (3) in the cloud, as a service from one of several providers.

TBL is a big fan of two of these three options. Frankly, we hate to see any of our clients investing in a traditional email architecture. We believe that the leverage from either a virtualized email infrastructure or email-as-a-service from a provider like Cisco Mail is a far better use of scarce capital and resources. Without going into a long dissertation, virtualized email can either snap on to existing virtual operations or serve as an initial virtualization project that can be leveraged across the enterprise for exceptional returns to the business. Outsourcing email to a provider like Cisco Mail gives users the Outlook experience they know and love, but exports all of the capital investment and support activity to a “cloud” provider. Remember how much fun it was the last time your Exchange environment crashed? Remember the pleasant conversations with the end-user community while your entire team dropped what they were doing and scurried to get email back up and running? In a Cisco Mail scenario, you export all of the pandemonium of an email crash to your provider. In a virtualized email scenario, your email system is a file, like all of your other virtualized systems, and can be restarted from a boot from SAN onto any available server resource. Both of these options sound a lot better to me than an entire IT staff running around with their hair on fire scrambling to get email back up and running.

Since email is no longer something we like to do, but is unquestionably something we have to do, why not optimize how we deliver email services through the miracles of either virtual infrastructure or cloud services? Then maybe we can enjoy the college football season a little more.