
Stop the Madness

By admin | Jun 30, 2010 | Insights

“Stop the madness” has been a marketing slogan for several campaigns over the years, most notably in the 1980s during the Reagan administration, when it was used to drive an anti-drug message to kids. It has also been used to peddle diet strategies, among other less altruistic endeavors. It seems to me that the phrase applies to the I/T business as well. Maybe we aren’t stopping the madness, but maybe we can “break the cycle” instead; a cycle that can certainly be expensive, and maybe even maddening, if repeated enough times.

Technology refreshes have been a part of the technology business since its inception. Smaller, cheaper, faster systems have replaced existing ones as Moore’s Law has worked its magic over the past 40 years. However, a refresh is not a refresh. That is, all refreshes are not created equal. Some refreshes are really upgrades to core systems: existing systems have run out of capacity, response times are suffering, and more/better cycles are added as workloads grow in scale and complexity. Such a refresh is probably properly categorized as a “good” refresh. Business is growing, transactions are increasing, services are expanding, so more capacity is needed. A refresh that is not quite as good, nor seen as positively by those who control the purse strings, is the desktop refresh and some WINTEL server refreshes. Typically, these refreshes are driven by the hardware assets’ incompatibility with new software. Rarely is the asset out of processing capacity; in fact, as we all know, non-virtualized servers and desktops rarely run at more than a fraction of their processing capability. Yet the refreshes of these systems march dutifully on, year after year, in our industry.

Think about the desktop or server refresh process and, more importantly, the results to the business. Assume I have 500 desktops in my organization that cannot run Windows 7. Further assume that, as an organization, I would like to be able to run the newest version of Windows. Using some internet pricing for a midrange desktop with no peripherals, I will spend roughly $599 per desktop, or $299,500 for new hardware. Add in new software licensing costs and the labor to build, install, and deploy each new machine, and my refresh costs are pushing $500,000 and possibly beyond.
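As a rough illustration of that math, here is a minimal back-of-the-envelope sketch. The 500 desktops and $599 hardware price come from the example above; the per-seat licensing and labor figures are purely illustrative assumptions to show how the total climbs toward $500,000.

```python
# Back-of-the-envelope desktop refresh cost for the example above.
# Hardware figures come from the post; licensing and labor are assumed placeholders.

DESKTOPS = 500
HARDWARE_PER_DESKTOP = 599      # midrange desktop, no peripherals (from the post)
LICENSE_PER_DESKTOP = 200       # assumed OS/software licensing per seat
LABOR_PER_DESKTOP = 150         # assumed build, install, and deploy labor per seat

hardware = DESKTOPS * HARDWARE_PER_DESKTOP
licensing = DESKTOPS * LICENSE_PER_DESKTOP
labor = DESKTOPS * LABOR_PER_DESKTOP

print(f"Hardware:  ${hardware:,}")                      # $299,500
print(f"Licensing: ${licensing:,}")                     # $100,000 (assumed)
print(f"Labor:     ${labor:,}")                         # $75,000 (assumed)
print(f"Total:     ${hardware + licensing + labor:,}")  # ~$474,500, pushing $500K
```

Swap in your own licensing and labor numbers; the point is that the hardware line item is only the starting price of the cycle.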

Consider the real-world activity and realities behind this process. I have twisted my CFO’s arm for $300-500K to fund the products needed to refresh my desktops, and I have deployed my scarce I/T resources to build and install 500 new desktops. At the end of the day, what is different about my business and the experience of my end users? Well, the organization is out of pocket a lot of money… and well… my users have new desktops that can run the latest software versions… at least until the time comes when they cannot.

At this point I would ask you how long your organization has been in the desktop technology business. 20 years? So, if your firm is on a 3-year refresh cycle you have gone through this process 6 times, and you are getting ready for a 7th. If you are on a 5-year refresh cycle you have gone through this process 4 times. The process being: beg for funding, buy new gear, build and install new systems, have it run at a fraction of its processing capacity for the duration of its useful life, reach the point where you want to provide new hardware, go back to step #1. Wash, rinse, repeat. Remember the reference phrase in the introductory paragraph?

The good news is, through the miracle of technology advancements, there is a better way. Desktop virtualization is not new, but it has matured to the point where it is a viable, and I would submit optimal, strategy for managing your technology refresh cycles. Management tools, bandwidth-optimizing protocols, and very capable, inexpensive desktop hardware make virtualizing the desktop well worth a look when you are facing your next desktop refresh. Want to run Windows 7? Great, install it on a server in your data center and push it to your virtual desktops.

The reality of the virtual desktop I like most is that in the virtual world you create a “reusable unit,” a virtual machine or desktop that lives on regardless of the underlying hardware. What I further like is that I can either keep older, “out of date” desktops in place until they roll over and expire, or put very inexpensive thin devices in front of my users. Those thin devices have a useful life well beyond a new desktop’s, precisely because they cannot, and never will, run Windows 7, Windows 8, or Windows 14. They are presentation devices driven by the virtualized assets running in your data center… where they belong.

Have you ever had an end user’s PC crash and found out that their last successful backup was sometime last year? Let me rephrase that: have you ever had an end user’s PC crash when the last successful backup was not sometime last year? What is the value of having all of your desktop images served, managed, and backed up by your professionals in the data center, from the data center? When was the last time a virus infected a desktop and was suddenly infecting every machine on your network, crunching the CEO’s laptop as he prepared for the quarterly earnings call with Wall Street? More good news here: when a virtual machine gets infected and goes haywire, guess what you do? You kill it and simply restart it. It cannot spread its infection to the rest of your users, saving them time and you lots of headaches and remediation cycles. I know that no one talks about the infections that raged through their enterprise, but when you have some “alone time,” think about how nice it would have been to deal with one infection on one virtual machine instead of remediating 500 desktops, plus all the laptops that got infected as well.

Virtualized infrastructure is not a panacea. Taken in a vacuum, the cost justification may not meet your hurdle rate when compared to doing “the refresh” one more time. However, when I think about the number of times I have had to do a refresh, the ancillary costs associated with the process, and the future cost of having to wash, rinse, repeat all over again in 36 months, virtualizing these assets makes more and more sense.
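To make the “wash, rinse, repeat” comparison concrete, here is a minimal sketch over a multi-year horizon. Only the ~$500K refresh total comes from the earlier example; the planning horizon, VDI build-out cost, thin-client price, and replacement cycles are all assumptions you would replace with your own numbers.

```python
# Hypothetical comparison: repeated full desktop refreshes vs. a virtualized desktop approach.
# All figures below are illustrative assumptions for a 500-seat shop, not real quotes.

YEARS = 12                      # assumed planning horizon
SEATS = 500

# Traditional path: a ~$500K refresh (hardware + licensing + labor) every 3 years.
REFRESH_COST = 500_000
REFRESH_CYCLE_YEARS = 3
traditional_total = (YEARS // REFRESH_CYCLE_YEARS) * REFRESH_COST

# Virtualized path: one-time data center build-out, plus cheap thin clients
# replaced on a much longer cycle because they never need to run the new OS.
VDI_BUILDOUT = 350_000          # assumed servers, storage, licensing, deployment labor
THIN_CLIENT_COST = 250          # assumed per-seat thin device
THIN_CYCLE_YEARS = 6
virtual_total = VDI_BUILDOUT + (YEARS // THIN_CYCLE_YEARS) * SEATS * THIN_CLIENT_COST

print(f"Traditional refreshes over {YEARS} years: ${traditional_total:,}")  # $2,000,000
print(f"Virtualized approach over {YEARS} years:  ${virtual_total:,}")      # $600,000
```

The exact numbers matter far less than the shape of the curve: one approach re-buys the whole fleet every cycle, the other carries a one-time investment plus a trickle of inexpensive endpoint replacements.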

I would suggest trying this on for size. When your next PC refresh (or server refresh) is on the horizon and you go ask for the money to fund the project, before the CFO says something to the effect of “didn’t we just buy everyone new PCs? Why are we spending $500K again?”, submit to him or her that you are asking to fund a refresh one more time, but this time you are going to break the traditional refresh cycle. This time you are going to deploy desktop assets that don’t care about Windows and that can be depreciated over a longer period of time. You are going to create a “desktop” that leverages the investments in your data center. Your new desktops are not going to infect their neighbors. Your new desktops have a virtual image that lives beyond the asset life of the underlying hardware. You are going to stop the madness of the refresh cycle. My bet is that you will get your project funded, and you might even get a “nice job” as you walk out the door with your approval. Now THAT would be madness ☺
