
The Power of Cisco Magnets

If there is any doubt as to the quality of products from Cisco, one need only examine the magnets included with the Catalyst 2960 and 3560 8-port compact switches. The magnets are intended for mounting the switches to the underside of desks, to filing cabinets, and so on. However, the amazing strength of these magnets became a point of curiosity for the engineers at TBL today, and a series of hypotheses and experiments was quickly devised to determine the full load capacity of the mounting devices. I am happy to report that a mere 5 magnets were able to suspend a 53 lb. IronPort server vertically from the side of a cubicle overhead. Now that is the power of Cisco!

What I did over Summer Vacation

TBL recognizes that for K-12 Information Technology teams, “summer vacation” is really the “summer busy season,” when major projects that cannot be done during the academic year are stuffed into a 90-day window.

This summer is shaping up to be especially busy for schools with PC labs. Microsoft is withdrawing support for Windows 2000 and Windows XP SP2 on July 13, 2010. Making matters worse, many PC lab workstations are not capable of running Windows 7, the suggested upgrade.

Are you looking at a refresh of your PC lab this summer? If so…STOP! There is a better way, and a better use of your scarce capital and human resources, both this coming summer and in the academic years ahead.

TBL is currently working with schools that are going to break the cycle of the PC refresh and virtualize their lab infrastructure. We are going to make sure that this next “refresh” of the PC lab creates a better learning tool for students, a less administration-intensive work environment for I/T staff, and extends your school’s capital investment well beyond the life of normal I/T assets.

Through Virtualization, TBL Networks can help you:

  • Extend the life of existing or new PC Lab assets well beyond a traditional 5-year cycle
  • Eliminate the stress and non-productive work associated with “Patch Tuesday”
  • Create a more consistent, better performing learning environment for students
  • Drive costs out of PC Lab Operations

Before you spend another nickel on your PC Lab, TBL Networks can perform an assessment for you to size a virtualized PC Lab environment that will bring your PC Labs under centralized control, unifying the student experience.
Please feel free to contact TBL Networks to schedule a free consultation and discuss how virtualizing your PC Labs can make for a great “summer vacation”.

David Rayner – 804-822-3645 drayner@tblnetworks.com

Stop the Madness

“Stop the madness” has been a marketing slogan for several campaigns over the years, most notably in the 1980s during the Reagan administration as part of an anti-drug message aimed at kids. It has also been used to peddle diet strategies, among other less altruistic endeavors. It seems to me that the phrase may be applicable to the I/T business as well. Maybe we aren’t stopping the madness, but maybe we can “break the cycle” instead; a cycle which can certainly be expensive, and maybe even maddening if repeated enough times.

Technology refreshes have been a part of the technology business since its inception. Smaller, cheaper, faster systems have replaced existing ones as Moore’s Law has worked its magic over the past 40 years. However, a refresh is not a refresh; that is, all refreshes are not created equal. Some refreshes are really upgrades to core systems: existing systems have run out of capacity, response times are suffering, and more and better cycles are added as workloads grow in scale and complexity. Such a refresh is probably properly categorized as a “good” refresh. Business is growing, transactions are increasing, services are expanding, so more capacity is needed. A refresh that is not quite as good, and not seen as positively by those who control the purse strings, is the desktop refresh, along with some WINTEL server refreshes. Typically, these refreshes are driven by the hardware assets’ incompatibility with new software. Rarely is the asset out of processing capacity; in fact, as we all know, non-virtualized servers and desktops rarely run at more than a fraction of their processing capability. Yet the refreshes of these systems march dutifully on, year after year, in our industry.

Think about the desktop or server refresh process and, more importantly, the results to the business. Assume I have 500 desktops in my organization that cannot run Windows 7. Further assume that, as an organization, I would like to be able to run the newest version of Windows. Using some internet pricing for a midrange desktop with no peripherals, I will spend roughly $599 per desktop, or $299,500 for new hardware. Adding in new software licensing costs and the labor costs to build, install, and deploy each new machine, my refresh costs are pushing $500,000 and possibly beyond.
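To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python. The $599 hardware price and the 500-seat count come from the paragraph above; the per-seat licensing and labor figures are illustrative assumptions of mine, chosen only so the loaded total lands near the $500,000 mark described above.

    # A quick sanity check of the refresh math above. The $599 hardware price
    # and the 500-seat count come from this post; the per-seat licensing and
    # labor figures are assumptions for illustration only.
    DESKTOPS = 500
    HARDWARE_PER_SEAT = 599   # midrange desktop, no peripherals (from the post)
    LICENSE_PER_SEAT = 150    # assumed software licensing cost per seat
    LABOR_PER_SEAT = 250      # assumed build/install/deploy labor per seat

    hardware_total = DESKTOPS * HARDWARE_PER_SEAT
    loaded_total = hardware_total + DESKTOPS * (LICENSE_PER_SEAT + LABOR_PER_SEAT)

    print(f"Hardware only: ${hardware_total:,}")   # Hardware only: $299,500
    print(f"Loaded total:  ${loaded_total:,}")     # Loaded total:  $499,500

Whatever soft-cost numbers you plug in for your own shop, the point stands: the hardware line item is only a little more than half the real bill.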

Consider the real-world activity and realities behind this process. I have twisted my CFO’s arm for $300-500K to fund the products needed to refresh my desktops, and I have deployed my scarce I/T resources to build and install 500 new desktops. At the end of the day, what is different about my business and the experience of my end users? Well, the organization is out of pocket a lot of money…and, well…my users have new desktops that can run the latest software versions…at least until the time comes when they cannot.

At this point I would ask how long your organization has been in the desktop technology business. 20 years? So, if your firm is on a 3-year refresh cycle, you have gone through this process 6 times, and you are getting ready for a 7th. If you are on a 5-year refresh cycle, you have gone through this process 4 times. The process being: beg for funding, buy new gear, build and install new systems, have it run at a fraction of its processing capacity for the duration of its useful life, reach the point where you want to provide new hardware, go back to step #1. Wash, rinse, repeat. Remember the reference phrase in the introductory paragraph?

The good news is that, through the miracle of technology advancements, there is a better way. Desktop virtualization is not new, but it has matured to the point where it is a viable, and I would submit optimal, strategy for managing your technology refresh cycles. Management tools, bandwidth-optimizing protocols, and very strong, inexpensive desktop technology make virtualizing the desktop well worth a look when you are facing your next desktop refresh. Want to run Windows 7? Great, install it on a server in your data center and push it to your virtual desktops.

The reality of the virtual desktop I like the most is that in the virtual world you create a “reusable unit” in the virtual machine or desktop that lives on regardless of the underlying hardware. What I further like is that I can either keep older “out of date” desktops in place until they roll over and expire, or deploy very inexpensive thin devices whose useful life extends well beyond that of a new desktop, precisely because those thin workstations never need to run Windows 7, Windows 8, or Windows 14 themselves. They are presentation devices driven by the virtualized assets running in your data center…where they belong.

Have you ever had an end user PC crash and found out that the last successful backup was sometime last year? Let me rephrase that: have you ever had an end user PC crash when the last successful backup was not sometime last year? What is the value of having all of your desktop images served, managed, and backed up by your professionals in the data center…from the data center?

When was the last time a virus infected a desktop and was suddenly infecting every machine on your network, crunching the CEO’s laptop as he prepared for the quarterly earnings call with Wall Street? More great news here: when a virtual machine gets infected and goes haywire, guess what you do? You kill it and simply restart it (a scripted version of this recovery is sketched below). It cannot spread its infection to the rest of your users, saving them time and you lots of headaches and remediation cycles. I know that no one talks about the infections that raged through their enterprise, but when you have some “alone time,” think about how nice it would have been to deal with one infection on one virtual machine instead of remediating 500 desktops, plus all the laptops that got infected as well.
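For the curious, here is a minimal sketch of that “kill it and restart it” recovery using pyVmomi, VMware’s Python SDK for the vSphere API. This is my illustration of the idea, not a prescribed workflow; the vCenter address, credentials, and VM name are all placeholders, and it assumes a known-good snapshot of the VM already exists.

    # Minimal sketch: roll an infected VM back to a known-good snapshot with
    # pyVmomi (VMware's Python SDK for the vSphere API). The vCenter address,
    # credentials, and VM name are placeholders; a snapshot must already exist.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        # Find the VM by name anywhere in the inventory.
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "lab-desktop-042")
        view.Destroy()

        # Power off if running, revert to the current snapshot, power back on.
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(vm.PowerOffVM_Task())
        WaitForTask(vm.RevertToCurrentSnapshot_Task())
        if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
            WaitForTask(vm.PowerOnVM_Task())
    finally:
        Disconnect(si)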

Virtualized infrastructure is not a panacea. Taken in a vacuum, the cost justification may not meet your hurdle rate when compared to doing “the refresh” one more time. However, when I think about the number of times I have had to do a refresh, the ancillary costs associated with this process, and the future cost of having to wash, rinse, repeat all over again in 36 months, virtualizing these assets makes more and more sense.

I would suggest trying this on for size. When your next PC refresh (or server refresh) is on the horizon and you go ask for the money to fund the project, before the CFO says something to the effect of “didn’t we just buy everyone new PCs? Why are we spending $500K again?”, submit to him or her that you are asking to fund a refresh one more time, but this time you are going to break the traditional refresh cycle. This time you are going to deploy desktop assets that don’t care about Windows and that can be depreciated over a longer period of time. You are going to create a “desktop” that leverages the investments in your data center. Your new desktops are not going to infect their neighbors. Your new desktops have a virtual image that lives beyond the asset life of the underlying hardware. You are going to stop the madness of the refresh cycle. My bet is that you will get your project funded, and you might even get a “nice job” as you walk out the door with your approval. Now THAT would be madness☺

It’s the Economy

The full phrase, penned by Democratic strategist James Carville, “It’s the economy, stupid,” helped then-candidate Bill Clinton shift the focus of the US electorate away from an immensely popular sitting president and his success in foreign affairs to the quandary of ordinary Americans as the US economy slipped into recession. Fast forward almost 20 years, narrow the focus to the world of business and the Information Technology that supports it, and the cry is still relevant…it’s the economy. However, in 2010 the direction of the economy is far less clear than it was in 1991, and it is this lack of clarity that makes our jobs in I/T exponentially more difficult. The good news is that while the challenge is the most difficult I have seen in 25 years in the I/T industry, there are new and innovative solutions that can help business leaders prepare to prosper in such uncertain times.

If you follow the business/political press, on consecutive days over the past week you would have read the following headlines: “Economic Growth Could Get Scary Good” followed by “The Mighty US in Five Stages of Decline”. You would also have seen articles about the US economy turning the corner as the recession ebbs away, transformed into a growth economy again, and on the same day read about the $500B pension time bomb ready to dismantle the entire California economy.

If I am a CIO trying to plan for the future of my business, what am I preparing for…“scary good” or “stages of decline”? I have followed economics and finance closely throughout my business career, and I have never seen such well-supported, rational, yet totally divergent expert opinions predicting the future of the US economy. Nobel economists argue equally convincingly that the economy is ready to rocket upward into a period of sustained growth, or that it is at the precipice and ready to fall off a cliff into a deep and prolonged recession.

What does this mean for those trying to plan and implement I/T strategies for business? It means you have to do the seemingly impossible. You have to prepare for lean times, where efficiency is the dominant objective and driving cost out of the business is paramount. At the same time, just in case those pesky bulls appear on the horizon, while you are driving the last nickel out of your operations you also need to be ready to quickly stand up new capacity, new applications, and new functionality to meet potential rising demand.

So, I am a CIO mapping out the plans to support my business. Do I have to pick one of the prognostications and hope I am right? Is that really my best play? If this were the situation 10 or even 5 years ago, the answer would have been “yes”. Place your bet on the bull or the bear and hope you are right. The good news is that if you are right, you will be the hero of the company. The other good news is that if you are wrong, you probably get to move on to a new assignment, maybe at a new firm.

When you come to a fork in the road, take it. Yogi Berra’s advice still rings true today. When you come to the fork in the road between building your infrastructure for massive efficiency or massive scale…take it. The great news for CIOs and business leaders today is that you can. My advice for business leaders is to “go cloud” and “go virtual”. Specifically, determine which applications and data should remain resident in the organization and virtualize the infrastructure needed to support those functions. Everything else, put in “the cloud”.

Every business will differ in its mix of what belongs “in the cloud” and what should remain under the care of local staff. However, the decision is not whether to execute on both of these strategies, but rather how much to virtualize on premise in your private cloud and how much to put in the “public cloud.” Failing to deploy both strategies simultaneously exposes your business to the “pick” between a bullish or bearish economy, when in reality no one knows which will come to pass; the swing could be extreme, so your supporting infrastructure must be ready to support the extremes.

Extremes have been anathema to I/T throughout our history, and we are now being asked to prepare for polar-opposite extremes at the same time. In the year in which James Carville declared “It’s the economy, stupid,” I/T and business leaders facing this quandary would have been relegated to making their best guess and hoping for the best. In 2010, while it is still the economy, and it is the economy that presents this seemingly daunting challenge, technology solutions have evolved to the point where businesses can prepare for and thrive in an environment of unexpected extremes. But they can’t do it with last year’s technology solutions, and they can’t do it after the tidal wave has formed on the horizon. Business and I/T leaders need to leverage the options before them now, before the economy breaks one way or the other, before extremes mandate options taken in reaction to, rather than in anticipation of, economic events.

The connection between economic events and the challenges in the I/T community has never been more evident or more compelling. While I am certain that James Carville was purely focused on getting his guy into office, I don’t think a more sustained and prophetic phrase has been coined since. It is the economy. The sooner we in I/T recognize and leverage that, the better off we and the organizations we support will be. Let’s get to work…and keep your eye on the horizon. I am not sure if I see a bull or a bear…but one of the two is coming.

When was the last time you thought VMotion was a bad idea?

I read a blog post over at Windows IT Pro recently that prompted this post. Mr. Greg Shields asks the question “When was the last time you VMotion-ed?”

To answer your question, Mr. Shields: I did it just this morning and once this afternoon. Nobody even noticed.

Mr. Shields goes on to quote one of his clients:

Well, we thought about that. But we found that we really don’t use VMotion pretty much ever. We know that we can use VMotion, and sometimes we do. But our performance is acceptable so we don’t need DRM, and we find that we’re really never doing activities that require us to relocate or re-balance our virtual machines.

With a followup to this:

That response really got me thinking about the uses of VMotion, in contrast with all the publicity the feature gets in the IT press:

  • You use VMotion prior to rebooting a host.
  • You use VMotion to re-balance load (often through DRM’s automated processes)
  • You use VMotion’s HA after an unexpected failure.

Mr. Shields didn’t name the product that lacks a feature like Distributed Resource Scheduler (hence the argument against it), so I won’t either. However, considering the source, I can tell you that it is probably a Redmond-based product.

So, here is my response to Mr. Shields, since he asked:

Just a few small points to clarify for those new to VMware who may be reading this.

First, VMotion and HA are two different things. You don’t use “VMotion’s HA”.

HA, or High Availability, is what recovers you from a host failure: it restarts the affected virtual machines on the surviving hosts (unplanned downtime).

VMotion is a live migration: it moves a running virtual machine from one host to another with no downtime (for planned events such as host maintenance). I find that those new to VMware get these two confused all the time.
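To make the distinction concrete, here is a minimal sketch of what triggering a VMotion looks like programmatically, using pyVmomi, VMware’s Python SDK for the vSphere API. This is my illustration, not anything from Mr. Shields’s post; the vCenter address, credentials, VM name, and destination host name are placeholders.

    # Sketch: trigger a VMotion (live migration) of a running VM to another
    # host with pyVmomi. Address, credentials, and object names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        """Return the first inventory object of the given type with a matching name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        vm = find_by_name(content, vim.VirtualMachine, "web-01")
        dest = find_by_name(content, vim.HostSystem, "esx02.example.com")

        # The VM keeps running while its memory and device state move hosts.
        WaitForTask(vm.MigrateVM_Task(
            host=dest,
            priority=vim.VirtualMachine.MovePriority.defaultPriority))
    finally:
        Disconnect(si)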

DRS (Distributed Resource Scheduler), not DRM (Digital Rights Management?), is the feature VMware uses to balance load across the cluster of available host resources by leveraging the VMotion functionality.
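And since DRS is simply a cluster-level setting that automates those same migrations, here is a similarly hedged pyVmomi sketch of turning on fully automated DRS for a cluster; again, every name and credential below is a placeholder.

    # Sketch: enable fully automated DRS on a cluster with pyVmomi, so vCenter
    # re-balances VMs via VMotion on its own. All names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "Prod-Cluster")
        view.Destroy()

        # Reconfigure the cluster in place (modify=True) with DRS enabled.
        spec = vim.cluster.ConfigSpecEx(
            drsConfig=vim.cluster.DrsConfigInfo(
                enabled=True,
                defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
    finally:
        Disconnect(si)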

Having cleared up those few items, here is my perspective.

My clients like being able to use VMotion to perform host maintenance during normal business hours. I see it all the time.

My clients also enjoy not having to worry about performance at the host level (as you would in the physical world) because DRS handles it. As long as they have enough resources in the cluster and their virtual infrastructure is properly designed, they don’t have to worry about manually placing or load-balancing their VMs on particular hosts.

In summary, I’m all for VMotion and DRS. You can leave DRM in iTunes. I buy all my music DRM-free from Amazon anyway.