
Cisco Nexus 1000v and vSphere HA Slot Sizes

When implementing the Cisco Nexus 1000v, High Availability (HA) slot sizes in your vSphere cluster can be affected. HA slot sizes are used to calculate failover capacity when the "Host Failures" admission control setting is used. By default, the slot size for CPU is the highest CPU reservation among all VMs in the cluster (or 256 MHz if no per-VM reservations exist), and the slot size for memory is the highest memory reservation among all VMs in the cluster (or 0 MB plus memory overhead if no per-VM reservations exist). If you want to dive further into HA and slot size calculations, I highly recommend reading Duncan Epping's HA Deepdive at Yellow-Bricks.com.
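
To put rough numbers on it (a purely hypothetical host, not a figure from this post): on a host with 9,000 MHz of CPU capacity available for virtual machines, the default 256 MHz slot yields about 35 CPU slots, while a 1,500 MHz slot yields only 6. A single large reservation can therefore dramatically reduce the cluster's calculated failover capacity.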


Each Cisco Nexus 1000v Virtual Supervisor Module (VSM) has a CPU reservation of 1500 MHz and a memory reservation of 2048 MB so that these resources are guaranteed for the VSMs. The slot sizes will be affected accordingly (1500 MHz for CPU and 2048 MB for memory).


If you do not want your slot size affected, and you do not want to set the advanced slot size parameters das.slotCpuInMHz or das.slotMemInMB, there is another solution that still lets you guarantee resources for the Nexus 1000v VSMs while maintaining smaller slot sizes.

Create a separate resource pool for each VSM with a CPU reservation of 1500 MHz and Memory reservation of 2048 MB.
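
If you prefer to script this, a minimal PowerCLI sketch along the following lines should do it. The cluster name, VSM names, and vCenter address are placeholders; adjust them for your environment.
[powershell]# Sketch: create a reserved resource pool per VSM and move the VSM into it (names are placeholders)
Connect-VIServer vcenter.example.com

foreach ($vsm in "VSM-1","VSM-2") {
    # Guarantee the VSM's resources at the pool level rather than with a per-VM reservation
    $pool = New-ResourcePool -Location (Get-Cluster "Production") -Name "RP-$vsm" -CpuReservationMhz 1500 -MemReservationMB 2048
    # If the VSM carries its own per-VM reservation, clear it so it no longer drives the slot size
    Get-VM -Name $vsm | Get-VMResourceConfiguration | Set-VMResourceConfiguration -CpuReservationMhz 0 -MemReservationMB 0
    # Place the VSM in its dedicated resource pool
    Get-VM -Name $vsm | Move-VM -Destination $pool
}[/powershell]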


Since slot sizes are only affected by per-VM reservations, the resource pools will give you the desired results without increasing your HA slot sizes. I wouldn't recommend doing this with too many VMs, as it turns into a management nightmare very quickly. However, for the two VSMs in a Cisco Nexus 1000v infrastructure, it is manageable.

Now your Nexus 1000v VSMs are happy, and you are not wasting resources in your HA cluster by calculating larger slot sizes than are required.

 

References

http://www.yellow-bricks.com/vmware-high-availability-deepdiv/

http://frankdenneman.nl/2010/02/resource-pools-and-avoiding-ha-slot-sizing/

Backing up the vCenter 4.x AD LDS Instance

vCenter is one of the most important components of your vSphere 4.x virtual infrastructure. Many advanced capabilities of vSphere 4 (vMotion, DRS, etc.) are not available without vCenter. Prior to vSphere 4.x, it was sufficient to back up the vCenter database; you could restore vCenter by building a new vCenter server, restoring the database, and reinstalling vCenter to attach to the restored database.

With the introduction of vSphere 4.x, vCenter 4.x started using Active Directory Application Mode (ADAM) on Windows Server 2003 and Active Directory Lightweight Directory Services (AD LDS) on Windows Server 2008 to accommodate Linked Mode for vCenter. The roles and permissions are stored in the ADAM or AD LDS database. In order to restore the roles and permissions, the ADAM or AD LDS database must be backed up.

VMware KB1023985 tells you that you need to back up the SSL certificates, the vCenter database, and the ADAM/AD LDS database. There are many well-known ways to back up a SQL database. However, backing up an AD LDS instance is a lesser-known procedure. The following PowerShell script will back up the AD LDS VMware instance on Server 2008 along with the SSL folder. As always, test it thoroughly before using it.
[powershell]#
# Name: VC_ADAM_SSL_Backup.ps1
# Author: Harley Stagner
# Version: 1.0
# Date: 08/17/2010
# Comment: PowerShell script to back up AD LDS
#          and the SSL folder for vCenter
#
# Thanks to Tony Murray for the AD LDS portion of the
# script.
#
#
#########################################################

# Declare variables
$BackupDir = "C:\backup\VMwareBackup"
$SSLDir = $env:AllUsersProfile + "\VMware\VMware VirtualCenter\SSL"
$IFMName = "adamntds.dit"
$cmd = $env:SystemRoot + "\system32\dsdbutil.exe"
$flags = "`"activate instance VMwareVCMSDS`" ifm `"create full C:\backup\VMwareBackup`" quit quit"
$date = Get-Date -f "yyyyMMdd"
$backupfile = $date + "_adamntds.bak"
$DumpIFM = "{0} {1}" -f $cmd,$flags
$ServiceVCWeb = "vctomcat"
$ServiceVCServer = "vpxd"

# Main
# Stop the vCenter services before taking the backup
Stop-Service $ServiceVCWeb
Stop-Service $ServiceVCServer -Force
# Create the backup folder if it doesn't exist
if (Test-Path -Path $BackupDir)
{ Write-Host "The folder" $BackupDir "already exists" }
else
{ New-Item $BackupDir -Type Directory }
# Clear the IFM folder (dsdbutil needs the folder to be empty before writing to it)
Remove-Item $BackupDir\* -Recurse

# Run dsdbutil.exe to create the IFM dump file
Invoke-Expression $DumpIFM
# Rename the dump file to give the backup a unique, date-stamped name
Rename-Item $BackupDir\$IFMName -NewName $backupfile
# Copy the SSL folder alongside the AD LDS dump, then restart the vCenter services
Copy-Item $SSLDir $BackupDir -Recurse
Start-Service $ServiceVCWeb
Start-Service $ServiceVCServer

# End Main[/powershell]
This script uses the dsdbutil.exe utility on Windows Server 2008 to back up the AD LDS instance and the SSL folder after it stops the vCenter services. By default, it backs these items up to "C:\backup\VMwareBackup". Change it to your liking.
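
If you want to run this on a schedule, something like the following (run once from an elevated prompt on the vCenter server) should work. The script path, task name, and start time are placeholders for your environment.
[powershell]# Sketch: register the backup script as a nightly scheduled task (adjust the path and time to taste)
schtasks.exe /Create /TN "VC ADAM SSL Backup" /SC DAILY /ST 02:00 /RU SYSTEM /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\VC_ADAM_SSL_Backup.ps1"[/powershell]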

Now, to restore the AD LDS instance data, follow the directions at TechNet.

References

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1023985

http://technet.microsoft.com/en-us/library/cc725665%28WS.10%29.aspx

http://www.open-a-socket.com/index.php/category/ad-lds-adam/

Have to Versus Want to

In both professional and personal life, there are things we like to do and there are things we have to do. We have to pay taxes. We have to pay rent for our office space. We have to keep accurate records of our business transactions. We like to engage with our clients on innovative projects. We like the work flexibility of remote access technology. We like going to a college football game on a crisp autumn afternoon. Based on my conversations with clients over the past few years, one of the things that definitely falls in the "have to do it" rather than the "like to do it" category is running corporate email systems.

Email has evolved into one of the most important, if not the most important, business applications for many organizations. As email has evolved from a simple communication tool that was a nice complement to the phone system, its complexity, its impact, its support requirements, and therefore its costs have grown exponentially. The "wow factor" for using email is gone. The "fun factor" of supporting email is definitely gone.

The good news in this development is that there is no reason to deploy, manage, or run email systems in the cumbersome and expensive ways of the past. Email systems can be deployed in one of three ways: 1) in a traditional architecture, with on-premise physical assets dedicated to email and managed by local staff; 2) virtualized, either on site or remotely, and run as files managed on a shared virtual infrastructure; or 3) in the cloud, as a service from one of several providers.

TBL is a big fan of two of these three options. Frankly, we hate to see any of our clients investing in a traditional email architecture. We believe that the leverage from either a virtualized email infrastructure or email-as-a-service from a provider like Cisco Mail is a far better use of scarce capital and resources. Without going into a long dissertation, virtualized email can either be a snap-on to existing virtual operations or an initial virtualization project that can be leveraged across the enterprise for exceptional returns to the business. Outsourcing email to a provider like Cisco Mail gives users the Outlook experience they know and love, while exporting all of the capital investment and support activity to a "cloud" provider. Remember how much fun it was the last time your Exchange environment crashed? Remember the pleasant conversations with the end user community while your entire team dropped what they were doing and scurried to get email back up and running? In a Cisco Mail scenario, you get to export all of the pandemonium of an email crash to your provider. In a virtualized email scenario, your email system is a file, like all of your other virtualized systems, and can be restarted off a boot from SAN to any available server resource. Both of these options sound a lot better to me than an entire I/T staff running around with their hair on fire scrambling to get email back up and running.

Since email is no longer something we like to do, but is unquestionably something we have to do, why not optimize how we deliver email services through the miracles of either virtual infrastructure or cloud services? Then maybe we can enjoy the college football season a little more.

Hype Meets Reality

It is nice to see the transformation of hype into reality. As someone who has spent a lot of years in technology sales and marketing, I know hype sometimes has a negative connotation in our industry. There have been times when hype has gotten ahead of technology's ability to deliver, and times when hype has led to misplaced expectations for what new technology can in fact deliver. However, technology hype (I think enthusiasm might be a better term) is not a bad thing. It creates excitement for and focuses attention on new innovations that have the potential to improve business operations and service delivery. Hype is always going to be ahead of reality; what is important is that, far more often than not, reality rises to meet the expectations of the hype (enthusiasm).

The hype (or enthusiasm) for cloud computing does not need any more gas dumped on the raging fire. I think cloud computing instead needs to see the foundations of business benefit beginning to take shape in the form of successful projects and realized business benefits. I am very pleased to report that we are starting to see the reality of cloud computing supporting the sky-high enthusiasm and expectations.

One TBL client in particular has, in essentially 120 days, transformed their entire operations from a quintessential 1990s operation into a fully operational private cloud, and the business benefits are starting to flow. What has changed and improved? Lots. A better question might be what hasn't? Tactical, transactional costs to "add more of the same" to the prior infrastructure, which were very high, are now gone. Growth by acquisition was stymied by an infrastructure that was inflexible and difficult at best to scale. In fact, growth of any kind was expensive to support and even more expensive, if not impossible, to maintain with any degree of integrity. Now, the private cloud is a facilitator of cost-effective growth and is a point of leverage for future acquisitions. Business recovery services that were simply an auditor's check-mark in the past will be fully dynamic and deployable in the coming 60 days. Cost-effective growth is now standard operating procedure. Business-differentiating agility is now baked into the business through the private cloud. This company can now turn on a dime, reacting to new needs in the market and deploying new services in record time.

None of this happened by accident. New leadership and staff have shown remarkable foresight in developing a winning strategy, combined with a superb ability to execute. TBL engineers have helped architect and deliver replicable best practices in deploying a private cloud. And not to be overlooked, the technology that delivers the private cloud has lived up to every iota of its hype, its potential, and its expectations. Cloud is certainly the biggest hype of the past several years. It is gratifying and exciting to see hype morph into reality for our clients and to see business benefits emerge as promised.

Death toll for Cisco Unity?

With the upcoming release of Unity Connection 8.5 from Cisco, a number of questions are brought to light. A single feature in this release, in my opinion, marks the end of our need for Unity. It is with a heavy heart and many fond memories that I’ll watch her sail into the sunset.

This feature is coined 'Single Inbox', though we all know it as a unified inbox. Prior to release 8.5, the only way to manage voicemails stored on a Connection server was through an IMAP connection (a finicky IMAP connection, I might add, seeing as it wouldn't work from mobile phones or really any device or application outside of Outlook).

Using WebDAV for Exchange 2003 or EWS for Exchange 2007 or 2010, a user can now see and manage voicemails from their Exchange mailbox, exactly like Unity UM provides.

Now, the key here is that Unity only provided this functionality at higher cost and risk. Not only did Unity require a schema update to install, it has been plagued by permissions and cohabitation issues from the beginning. To be clear, none of these issues are product or development problems. Actually, it couldn't be further from that. Jeff Lindborg and his team have consistently developed some of the best-crafted products and management tools I've ever seen.

In fact, nearly all issues are self-inflicted. Think about it: with Unity UM, the Active Directory, Exchange, Windows Server, or security teams all have the ability to 'affect' Unity by way of its many dependencies.

With Unity Connection's new release, it has now caught up with all Unity features, and it is delivered as a hardened appliance, which removes nearly all of the dependencies mentioned above.

…and the migration ain’t that bad either. Cisco deserves an atta’ boy on this one.

Active Directory Authentication – Accountability in ESX / ESXi 4.1

As part of TBL's professional services datacenter practice, I perform many health and security checks on virtual infrastructures for clients. One of the common issues that I run into is the use of the default "root" account for administering ESX servers. This is an issue for two reasons:

  • The “root” account has a tremendous amount of power and the password for it is typically the same shared password on each ESX host.
  • If all administration is done with the "root" account, there is no audit trail for accountability. It could have been Joe, Bob, or Sue that logged into the ESX host. You just don't know.

Of course, most administration should be done through vCenter, but you still occasionally need to log into an ESX host directly. The solution to this that I have recommended in the past has been to create local user accounts coinciding with the Active Directory user name on each ESX host. Then do not use root unless absolutely necessary when performing administrative tasks directly on the host. However, this meant that the IT Administrators would need to manage user accounts in Active Directory and the local accounts on the ESX / ESXi hosts.

There has been a "less than ideal" solution to Active Directory authentication for quite a while (see Scott Lowe's article). However, that solution was very laborious, involved the command line, and only worked on ESX Classic, not ESXi.

With the release of vSphere 4.1, native Active Directory authentication is one of the many new features. Here’s how easy it is to implement once you have ESX installed.

  1. Connect to your ESX/ESXi server with the vSphere Client.
  2. Click on “Inventory” and highlight your ESX/ESXi server.
  3. Click on the “Configuration” tab.
  4. Navigate to “Software –> Authentication Services”
  5. Click on “Properties” on the right hand side.
  6. Change the “Directory Service Type” from “Local Authentication” to “Active Directory”
  7. Once you do that and enter your domain, click "Join Domain" and you will be prompted for appropriate credentials to join the domain.
  8. Click “OK” when you are done.
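
If you would rather script the join, PowerCLI 4.1 exposes the same operation. The sketch below is an assumption about how it might look; the host name, domain, and join account are placeholders, and it is worth confirming the exact parameter names with Get-Help Set-VMHostAuthentication.
[powershell]# Sketch: join an ESX/ESXi 4.1 host to Active Directory with PowerCLI (names and credentials are placeholders)
Connect-VIServer vcenter.example.com
Get-VMHost esx01.example.com | Get-VMHostAuthentication | Set-VMHostAuthentication -JoinDomain -Domain example.com -Username "EXAMPLE\svc-esxjoin" -Password "********"[/powershell]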

 


 

That’s it! Now you can have accountability controlled through Active Directory Authentication. Joe, Bob, and Sue can all use their respective Active Directory accounts for authentication. Accountability!

 


 

Permissions can now be added for Active Directory users and groups as well.
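
As a hedged PowerCLI illustration (the group, host, and role names below are placeholders, not from the original post), granting an AD group rights directly on a host could look like this:
[powershell]# Sketch: assign an Active Directory group the Administrator role on a single host (names are placeholders)
New-VIPermission -Entity (Get-VMHost esx01.example.com) -Principal "EXAMPLE\ESX Admins" -Role (Get-VIRole -Name "Admin") -Propagate:$true[/powershell]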

 


You can even use it with the vSphere CLI and the Direct Console User Interface (DCUI) on ESXi.


Should you need the local "root" account for emergencies, it will still be available to you. Otherwise, do your company a favor and maintain an audit trail for administrative actions on your infrastructure.

Here’s a New Term…

I try to read a lot of blogs, books, and articles that provide insights into what is going on in the market and how we in the technology business can help our clients best adjust to evolving economic, consumer, and technology changes. I’d love to claim credit for this new term and am a bit frustrated that I didn’t think of this before given some of the blog entries I have posted…

People speak and write of "the new normal" when describing the horizon for our business and economic future. I read an article this week that instead described "The New Abnormal". The net of the article was that there really is no "normal", but that there is a new abnormal in consumer behavior that is odd (abnormal) given all we know about the state of our tepid economic recovery and the worries about profligate government spending. The new abnormal is characterized by families taking expensive vacations to exotic destinations and then worrying about saving money by eating out at McDonald's or shopping at Target…while at an exotic port of call. Further evidence of the new abnormal can be seen in the sales results of high-end car companies like Mercedes and BMW, which are reporting excellent sales volumes, while dollar stores are simultaneously thriving. I think on the way home I may swing by a Dollar Tree store to see how many new Mercedes and BMWs are parked out front.

I have not yet sorted out the direct correlation of the New Abnormal for our business and for our clients, but on the surface it seems to be another pillar supporting the case for massive business agility. I hesitate to make yet another connection to cloud computing in all of this, as everything up to and including the rodents eating in my garden seems to be a driver for cloud computing. However, I think this underscores what we have discussed with our clients: the unpredictability of the future of markets and the economy has never been higher. Ironies in consumer behavior may not be a basis for an I/T infrastructure decision, but the schizophrenic nature of the economic cycle that drives such consumer behavior may be. We stand by our recommendations to our clients: optimize what you know, plan for what you anticipate, and prepare to be wrong. Therefore, make investments that are fluid, that can expand and contract with your infrastructure needs. The great news in all of this is that technology exists today to help you prepare for any economic outcome…and by any historical measure, that is indeed abnormal.