
A Structured Virtual Infrastructure Part II: Compute Platform Hardware

In Part I of this series, I discussed some design options for a virtual infrastructure (Traditional Rackmount, Converged Rackmount, and Converged Blade). Using the Converged Blade option as the model going forward, we’ll explore the individual components of this solution. This post covers the Compute Platform (UCS B-Series) in more detail.

 

image

Let’s start with the “brains” of the UCS B-Series, the 6100 series fabric interconnects.

6100 Series Fabric Interconnects:

Interconnect / Module Options:

  • 6120XP – Fabric Interconnect with (20) 10GbE/FCoE-capable SFP+ ports and a single expansion module slot
  • 6140XP – Fabric Interconnect with (40) 10GbE/FCoE-capable SFP+ ports and two expansion module slots

Expansion Module Options:

  • 10Gbps SFP+ – (6) ports
  • 10Gbps SFP+ – (4) ports, 1/2/4 Gbps Native Fibre Channel SFP+ – (4) ports
  • 1/2/4 Gbps Native Fibre Channel SFP+ – (8) ports
  • 2/4/8 Gbps Native Fibre Channel SFP+ – (6) ports

Below is a diagram of the UCS 6120XP labeled with the different ports:

image

A redundant UCS system consists of two 6100 series devices. They are connected via the cluster ports to act in unison. If a port or an entire 6100 series interconnect were to fail, the other would take over.

The 6100 series fabric interconnects provide 10Gb connectivity to the UCS 5108 via a module called the 2104XP fabric extender. A single pair of 6100 series fabric interconnects can manage up to twenty chassis depending on your bandwidth needs per chassis.

5108 Chassis (8 blade slots per chassis):

image

As you can see from the diagram, you have two 2104XP Fabric Extenders per chassis. Each of the 2104XPs has four 10Gb ports, for a total of up to 80Gbps of throughput for the chassis. So, there is plenty of bandwidth, with the added benefit of fewer cables and, consequently, easier cable management. The only cables that will ever be needed for the back of the 5108 chassis are up to eight cables for connectivity and up to four cables for power.
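
To make that arithmetic explicit, here is a quick back-of-the-envelope sketch in plain Python (nothing UCS-specific). The figures come straight from this post: two 2104XP fabric extenders per chassis, four 10Gb uplinks each, and up to four power cables.

```python
# Rough math for UCS 5108 chassis uplink bandwidth and cabling.
FEX_PER_CHASSIS = 2   # 2104XP fabric extenders per 5108 chassis
UPLINKS_PER_FEX = 4   # 10Gb ports per 2104XP
GBPS_PER_UPLINK = 10

def chassis_bandwidth_gbps(uplinks_per_fex=UPLINKS_PER_FEX):
    """Aggregate uplink bandwidth for one chassis."""
    return FEX_PER_CHASSIS * uplinks_per_fex * GBPS_PER_UPLINK

def chassis_cables(uplinks_per_fex=UPLINKS_PER_FEX, power_cables=4):
    """Data plus power cables on the back of one chassis."""
    return FEX_PER_CHASSIS * uplinks_per_fex + power_cables

print(chassis_bandwidth_gbps())   # 80 Gbps with all eight uplinks cabled
print(chassis_bandwidth_gbps(2))  # 40 Gbps with two uplinks per fabric extender
print(chassis_cables())           # 12 cables total: 8 data + 4 power
```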

Since the bandwidth needed for the external network is calculated at the Fabric Interconnect level, all that is needed at that point is to calculate the computing needs for the workload (CPU and RAM). This is where the blades themselves come in.

Blade Options:

  • Full-Width Blades (B250 M1, B440 M1) take up two chassis slots each
  • Half-Width Blades (B200 M2, B230 M1) take up one chassis slot each
  • You can have a combination of blades in the same chassis up to eight chassis slots
  • Blade processor configurations vary.
  • The B2xx blades have 4-, 6-, or 8-core processors in a dual-socket configuration
  • The B440 M1 can hold up to (4) sockets of the Intel 7500 series 8-core processors
  • The B250 M1 holds up to 384GB of RAM in a full-width form factor
  • The B200 M2 holds up to 96GB of RAM in a half-width form factor
  • The B230 M1 holds up to 256GB of RAM in a half-width form factor
  • The full-width servers can hold up to (2) mezzanine cards for connectivity. Each card has (2) 10Gb ports, for 20Gbps of connectivity per card.

M81KR Virtual Interface Card:

The M81KR Virtual Interface Card deserves a special mention. This mezzanine card is capable of dividing its (2) 10Gbps ports into a combination of up to 56 virtual Ethernet and Fibre Channel ports. This way you can manage port isolation and QoS for your blades like you may be used to in a traditional rackmount virtual infrastructure. As these posts continue, we will explore why this functionality may be needed for the virtual infrastructure when using the Converged Blade Infrastructure Model.
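
To make the idea concrete, here is a minimal, purely conceptual sketch of carving the card's two 10Gb ports into isolated virtual interfaces with relative QoS weights. This is not UCS Manager or Nexus syntax; the interface names and weights are hypothetical, and only the 56-interface ceiling comes from the text above.

```python
from dataclasses import dataclass

MAX_VIRTUAL_INTERFACES = 56   # combined vNIC/vHBA limit cited in this post

@dataclass
class VirtualInterface:
    name: str
    kind: str        # "eth" or "fc"
    fabric: str      # "A" or "B" (which 10Gb port / fabric it maps to)
    qos_weight: int  # relative share of that 10Gb uplink (hypothetical)

profile = [
    VirtualInterface("vmnic-mgmt",    "eth", "A", qos_weight=10),
    VirtualInterface("vmnic-vmotion", "eth", "B", qos_weight=30),
    VirtualInterface("vmnic-vmdata",  "eth", "A", qos_weight=40),
    VirtualInterface("vhba-fabric-a", "fc",  "A", qos_weight=20),
    VirtualInterface("vhba-fabric-b", "fc",  "B", qos_weight=20),
]

assert len(profile) <= MAX_VIRTUAL_INTERFACES

# Each fabric (physical 10Gb port) shares its bandwidth by weight.
for fabric in ("A", "B"):
    members = [v for v in profile if v.fabric == fabric]
    total = sum(v.qos_weight for v in members)
    for v in members:
        share = 10 * v.qos_weight / total
        print(f"{v.name}: ~{share:.1f} Gb share of fabric {fabric}")
```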

This post explored some of the Compute Platform Hardware components. The next post in this series will explore some of the software components and management that make the UCS compute platform ideal for a structured virtual infrastructure that can scale incrementally and be managed easily.

A Structured Virtual Infrastructure Part I: Physical Infrastructure

Server virtualization is infectious. It is a technology that tends to take off at a record pace in IT organizations that have adopted it as part of their infrastructure. It has been my experience that organizations fall into one of two broad categories when it comes to their virtualization initiatives. They either look at server virtualization as a “Strategic Initiative” or they use server virtualization as a “Tactical Tool.” Let’s explore these categories and then I’ll discuss some infrastructure options for a structured virtual infrastructure.

Server Virtualization as a “Tactical Tool”

I have seen this in many organizations. The IT group needed to test a new application or needed to spin up a new server quickly. What’s the quickest way to spin up a new server? Server virtualization, of course. So, here is how I see many infrastructures get started:

  • IT department downloads the free vSphere Hypervisor
  • IT department proceeds to click next until the hypervisor is installed
  • IT department spins up a few virtual machines on the hypervisor
  • “Life is good. That was easy wasn’t it?”
  • “It’s so easy and cool that demand creeps up for more virtual machines”
  • Pretty soon the IT department wants to host production workloads on the hypervisor
  • “But wait. What about failover, live migration, etc.? Don’t I need a SAN for that?”
  • “How much storage do I need?”
  • IT department calculates how much space they are using on their servers, or worse yet, how much disk space in total is available on all of their servers combined
  • “Wow! I need a lot of space to host all of those servers”
  • IT department buys large slow “shared disks” of some variety to satisfy the SAN requirement
  • IT department sets up vCenter on a spare server
  • IT department clicks next until a few hypervisors are installed and added to the new cluster complete with “shared storage”
  • Now there is some equipment and software in place to host virtual machines
  • IT department spins up new virtual machines until they are suddenly out of capacity or things are “just slow and error prone”
  • Virtualization stalls because there is no more capacity and there is a lack of trust in the virtual infrastructure as it stands
  • IT department starts purchasing physical servers again for “critical” applications
  • “Now DR must be provided for those “critical” applications. How can we protect them?”
  • “The easiest thing to do would be to leverage virtualization, but we’re out of capacity and the platform has been problematic”
  • “What do we need to do to leverage virtualization on a larger scale in our infrastructure?”

It’s a vicious cycle, and it is why I continue to see companies only 20-40% virtualized. It is great that server virtualization technology has been embraced. However, without proper planning and a structured approach to building and maintaining the virtual infrastructure, many organizations will continue to be only 20-40% virtualized. If they stall, they leave many of the benefits of server virtualization, and even money, on the table.

So, this series of posts will explore the alternative of server virtualization as a “Strategic Initiative.” This is the approach that I take with my clients at TBL to either build a structured virtual infrastructure from the ground up or remediate a “tactical tool” virtual infrastructure to the point that it becomes an effective platform to host the organization’s infrastructure moving forward.

Physical Infrastructure Choices

There are many options when it comes to virtual infrastructure hardware. Before any hardware choices are made, a capacity planning engagement should occur. Notice that capacity planning was not mentioned at all in the “Server Virtualization as a Tactical Tool” scenario. Look at this infrastructure as if it is going to host all of your physical servers, even if you will not start there. How else can you determine whether the infrastructure purchased for a new virtual infrastructure is sufficient if capacity planning is not performed? I can’t count the number of times that I have heard the equivalent of the phrase below:

  • “These host servers and storage should do since my physical servers don’t really do much.”

How do you know that your host servers don’t do much unless you have performed capacity planning? Is it a gut feeling? I have seen many gut feelings cause server virtualization to stall. We need to examine the four “core” resources (CPU, RAM, DISK, and NETWORK) to determine not only our capacity but the level of performance needed. After a proper capacity planning engagement we can determine the “feeds and speeds” of our hardware. However, the hardware choice becomes about more than just raw specs in a structured virtual infrastructure. Let’s examine some options.
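
As a rough illustration of what that capacity planning exercise looks like, here is a minimal sketch that aggregates measured peaks for the four core resources and estimates host count. Every number below (the sample workloads, headroom factor, and host spec) is a made-up assumption for illustration, not a sizing recommendation.

```python
import math

servers = [
    # name,      peak CPU GHz, peak RAM GB, peak disk IOPS, peak net Mbps
    ("file-srv", 2.1,          6,           250,            90),
    ("sql-srv",  6.4,          24,          1800,           220),
    ("web-srv",  3.0,          8,           400,            150),
]

HEADROOM = 1.25  # 25% growth/failover headroom (assumption)

totals = {
    "cpu_ghz":   sum(s[1] for s in servers) * HEADROOM,
    "ram_gb":    sum(s[2] for s in servers) * HEADROOM,
    "disk_iops": sum(s[3] for s in servers) * HEADROOM,
    "net_mbps":  sum(s[4] for s in servers) * HEADROOM,
}

# Hypothetical host: 2 sockets x 6 cores x 2.66 GHz and 96 GB of RAM.
host_cpu_ghz = 2 * 6 * 2.66
host_ram_gb = 96

hosts_needed = max(
    math.ceil(totals["cpu_ghz"] / host_cpu_ghz),
    math.ceil(totals["ram_gb"] / host_ram_gb),
) + 1  # +1 host for N+1 failover capacity

print(totals)
print("hosts needed:", hosts_needed)
```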

Traditional Rackmount Server Infrastructure

This is the standard virtual infrastructure that has been around for a while. With this approach, you take rackmount servers as hosts and provide shared storage via iSCSI, NFS, or Fibre Channel. A diagram of this approach can be seen below.

image

This infrastructure is well understood. However, the scalability is somewhat limited. Typically, a virtual infrastructure host will have eight to ten cables attached to it in a 1GbE environment. This is due to the way that traffic should be separated in a virtual infrastructure. This is fine for a few hosts. As the infrastructure is scaled, the number of cables and ports required becomes problematic. I have seen environments where shortcuts were taken to provide enough ports by combining virtual infrastructure traffic types even though they should be separated. As more hosts are needed, a better approach to scaling the infrastructure needs to be in place.

Converged Rackmount Server Infrastructure

This infrastructure consolidates the traditional 1GbE infrastructure into a 10GbE infrastructure by connecting to an FCoE or straight 10GbE switch. This allows more bandwidth and cuts down on the port count required as the infrastructure scales.

image

As this infrastructure is scaled, the number of cables and ports required is much more manageable. It must be noted that the cable infrastructure still scales linearly with the hosts. Port count can still be an issue in larger environments. Also, we really haven’t added anything new on the server management front in this design choice. Again, for smaller, relatively static environments this can work nicely. If the infrastructure needs to be able to scale quickly and efficiently, there are better options.

 

Converged Blade Infrastructure

Large-scale ease of management, efficient scaling, and massive compute capacity can be achieved, without the inherent cable and port count problems, with a converged blade infrastructure. In the example below, a Cisco UCS B-Series converged blade infrastructure is used to achieve these benefits.

image

Let’s look at the components of this infrastructure model.

  • The UCS 6100 series would be similar to the FCoE switches in the Converged Rackmount Infrastructure. I say similar because it is ideal to still have upstream SAN and LAN switches. In this scenario the 6100 pair acts like a host (or multiple hosts) attached to the Fibre Channel fabric. They accomplish this with an upstream switch that is NPIV capable.
  • The blade chassis provide the backplane connectivity for your compute resources, or blades. Each chassis can have up to (8) 10Gb FCoE ports for connectivity. The blades share that connectivity to the upstream 6100s.
  • The 6100s then take that FCoE traffic and split it into Fibre Channel to connect to the upstream SAN fabric and Ethernet to connect to the upstream LAN fabric.
  • Instead of calculating bandwidth and port counts at the server level as you would in a traditional rackmount scenario, you calculate bandwidth needs at the 6100 level.
  • Less cabling, more scalability, easier management, and a smaller footprint (see the quick cable-count sketch after this list).
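
For a feel of how cabling scales across the three models, here is a back-of-the-envelope comparison. The 8-10 cables per 1GbE rackmount host and the eight FCoE uplinks per blade chassis come from this series; the figure of two converged 10GbE cables per rackmount host is an assumption, half-width blades are assumed, and power cables plus the 6100 uplinks to the LAN/SAN fabrics are excluded.

```python
import math

def traditional_rackmount_cables(hosts, cables_per_host=9):
    # 8-10 cables per 1GbE host; 9 used as a midpoint.
    return hosts * cables_per_host

def converged_rackmount_cables(hosts, cables_per_host=2):
    # Assumption: two converged 10GbE cables per host.
    return hosts * cables_per_host

def converged_blade_cables(hosts, blades_per_chassis=8, uplinks_per_chassis=8):
    # Up to eight chassis uplinks serve all blades in a 5108.
    chassis = math.ceil(hosts / blades_per_chassis)
    return chassis * uplinks_per_chassis

for hosts in (4, 16, 32):
    print(hosts, "hosts:",
          traditional_rackmount_cables(hosts), "traditional vs",
          converged_rackmount_cables(hosts), "converged rack vs",
          converged_blade_cables(hosts), "converged blade")
```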

With the up-front investment in the 6100s, this architecture scales out very nicely with only incremental cost. Also, the 6100s are the single point of management in this infrastructure, using UCS Manager. UCS abstracts the unique information that would identify a server into a service profile. The types of data in the service profile may include items such as:

  • BIOS Settings
  • WWNs
  • MAC Addresses
  • Boot Order
  • Firmware Revisions

This way, settings that would normally be configured after a server arrives can be pre-configured. When a new blade arrives, you can simply slide it into the chassis, assign the service profile, boot it, and it is ready for an OS install in minutes. If that OS is ESXi, the install itself only takes about 5 minutes as well.
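
Conceptually, a service profile is just a bundle of identity and settings that lives apart from the hardware. The sketch below models that idea in plain Python; it is not the UCS Manager API, and every value shown is a hypothetical example.

```python
# A conceptual model of a service profile: the identity lives in the
# profile, not on the blade, so a new blade becomes "the same server"
# the moment the profile is associated with it.
service_profile = {
    "name": "esx-host-01",
    "bios": {"turbo_boost": True, "vt_enabled": True},
    "wwpns": ["20:00:00:25:b5:00:0a:01", "20:00:00:25:b5:00:0b:01"],
    "macs": ["00:25:b5:00:0a:01", "00:25:b5:00:0b:01"],
    "boot_order": ["san", "cd-rom"],
    "firmware_package": "1.4(1m)",
}

def associate(profile, chassis, slot):
    """Pretend to push the profile's identity onto the blade in the slot."""
    print(f"Associating {profile['name']} with chassis {chassis}, slot {slot}")
    for key in ("bios", "wwpns", "macs", "boot_order", "firmware_package"):
        print(f"  applying {key}: {profile[key]}")

# Slide a new blade into chassis 1, slot 3, and it inherits the identity.
associate(service_profile, chassis=1, slot=3)
```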

With the Converged Blade Infrastructure, we set up a foundation for ease of incremental scalability as the environment grows. Using this as the model infrastructure, the upcoming posts will examine the different components involved in more detail so that you can get a holistic view of the entire virtual infrastructure as a structured approach is taken to building it out.

June TBL Lunch & Learn – Windows 7 Migration and VDI

Join us for lunch to learn more about VDI and how it can help you migrate to Windows 7. TBL’s own VMware Certified Design Expert (VCDX) Harley Stagner will lead the discussion and allow you the chance to ask questions. 

PLUS, NO POWERPOINT PRESENTATION!

Dates/Locations:

Richmond, VA – June 9th – Hondo’s
Virginia Beach, VA – June 10th – Ruth’s Chris

Topics:

  • What is VDI?
  • The components of a virtual desktop.
  • How do we gain operational benefits from VDI?
  • What does Windows 7 have to do with it?
  • How VDI can help with pre-Windows 7 software.

Who should attend:

Anyone interested in virtual desktops and simplifying their infrastructure.

Save your spot! Register now!

Agenda:

11:30 AM – Registration
11:35 AM – Order Lunch
12:00 PM – Discussion with Harley Stagner
  1:30 PM – Event Close

About Harley:

Harley Stagner is the lead VMware Engineer at TBL Networks. He is the first VMware Certified Design Expert (VCDX) in Virginia and just the 46th person worldwide with that title. Harley is also the author of Pro Hyper-V, which was published in 2009.

About TBL Networks:

TBL Networks, 2010 Cisco Collaboration Partner of the Year and certified VMware Enterprise Solutions Provider partner, provides our customers a wide range of advanced technology solutions, with a focus on Unified Communications, Virtualization and Storage.

Ask Harley: Guitar Hero Edition

Harley Stagner knows virtualization.  He’s the first person in the Commonwealth of Virginia to receive the title of VMware Certified Design Expert (VCDX), and just the 46th person worldwide to earn this elite certification. From ESXi to VDI, public cloud to private cloud, virtual machines to virtual networks, Harley Stagner is the man when it comes to virtualization and VMware.

But what about Harley’s other talents?  Just because he is TBL Networks’ resident expert on VMware, it doesn’t mean that Harley’s skills end at the network server.

On May 5th, we plan to show one more of the many sides of Mr. Stagner with “Ask Harley: Guitar Hero Edition.”

On Thursday, May 5th, starting at 7 PM, Harley will answer your questions about virtualization live on Twitter. Following this presentation, Harley and his band, the 46ers, will play the song of your choice from Guitar Hero: World Tour – LIVE ON THE INTERNET.

To participate, you need to do just three things:

1)      SUBMIT YOUR VIRTUALIZATION QUESTION

You can submit your questions three simple ways:

  1. Twitter  – Post your questions on Twitter and tag with #AskHarley.
  2. Email – Send your questions to twitter@tblnetworks.com.
  3. Facebook – Post your question on the Wall at www.facebook.com/tblnetworks

To ensure that Harley has adequate time to review your questions, please submit them by end of business on Wednesday, May 4th.

2)      VOTE ON A SONG

To vote on a song, go to www.facebook.com/tblnetworks and vote on the Poll.

3)      GO TO WWW.TWITTER.COM/TBLNETWORKS ON MAY 5th, AT 7PM EST

Once Harley has finished answering your questions on Virtualization, the 46ers will hit the virtual stage and rock your world.

If you want to know more about virtualization, ask Harley, and then prepare to be rocked.

End to end virtual network security with the Cisco Nexus VSG

So I’ve been spending a lot of time in our lab with the Cisco Nexus Virtual Security Gateway. I have come to the conclusion that it rocks! Finally, the virtual infrastructure is no longer treated as a second class citizen when it comes to securing network traffic between virtual machines. We are at a point now with the Cisco VSG that we can have robust Cisco infrastructure, including security, from the upstream physical network to the virtual network.

The Cisco Nexus VSG builds upon the Nexus 1000v distributed virtual switch and communicates with the Virtual Ethernet Modules in the Nexus 1000v to provide a very robust security policy engine that can perform granular filtering and matching on a number of parameters. For example:

  • Network (IP address, port number, etc.)
  • VM (VM Name, Installed OS Name, Cluster, Host, Zone)

Yep, that’s right, I said VM. Since the Cisco VSG integrates with the vSphere APIs and vCenter, you can filter on items like a virtual machine name or partial name, installed OS, cluster, etc. This is very powerful. I no longer have to rely on network and IP rules alone to filter traffic between virtual machines. This is a more intelligent approach to filtering that really highlights the synergies that Cisco and VMware have established. Best of all, once it is set up, everything is managed from a single Cisco Virtual Network Management Center (VNMC) instance. This web-based management tool lets you manage multiple Virtual Security Gateway instances. Let’s look at a simple example of how easy it is to perform traffic filtering in the virtual infrastructure with the Cisco VSG.

Topology and Components:

  • vSphere 4.1 Enterprise Plus Host Servers
  • Cisco VNMC VM
  • Cisco Nexus 1000v Infrastructure
  • Cisco VSG Infrastructure
  • tenanta-srv1 VM
  • tenanta-srv2 VM
  • tenantb-srv1 VM
  • tenantb-srv2 VM

The goal of this configuration is to allow the following communication flows:

  • tenanta-srv1 and tenanta-srv2 should communicate
  • tenantb-srv1 and tenantb-srv2 should communicate
  • The Tenant A servers (tenanta-srv1 and tenanta-srv2) should not be able to communicate with the Tenant B servers (tenantb-srv1 and tenantb-srv2)
  • Anyone else should be able to communicate with both the Tenant A and Tenant B servers
  • There is a further caveat that the Tenant A and Tenant B servers are both on the same subnet (don’t worry, these servers belong to the same company 😉)

Below are the network settings:

  • tenanta-srv1 VM – 10.91.41.200
  • tenanta-srv2 VM – 10.91.41.201
  • tenantb-srv1 VM – 10.91.41.202
  • tenantb-srv2 VM – 10.91.41.203
  • A client with another IP address

Here are the general steps for setting up this scenario once the Cisco VSG infrastructure is in place:

  • Create a tenant
  • Assign the VSG to the tenant
  • Create a zone each for the Tenant A and Tenant B servers (these zones match VMs whose names contain “tenanta” and “tenantb” respectively; a rough sketch of this matching logic follows the list)
  • Create a firewall policy for the VSG
  • Create a policy set that includes the policy
  • Bind the policy set to the VSG
  • Bind the tenant to a port-profile so that any VM that is on that port-profile is filtered with the policy rules
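
Before the screenshots, here is a rough sketch of the rule logic these steps produce, modeled outside of any Cisco product: zones match on VM-name substrings, traffic between the two tenant zones is dropped, and everything else is permitted. The actual VSG rules are in the screenshots below; this is just one way to express three rules that satisfy the stated flows.

```python
def zone(vm_name):
    """Name-based zone membership, as described in the zone step above."""
    if "tenanta" in vm_name:
        return "zone-tenant-a"
    if "tenantb" in vm_name:
        return "zone-tenant-b"
    return "external"

rules = [
    # (source zone,    destination zone, action)
    ("zone-tenant-a", "zone-tenant-b", "drop"),
    ("zone-tenant-b", "zone-tenant-a", "drop"),
    ("any",           "any",           "permit"),
]

def evaluate(src_vm, dst_vm):
    """Return the action of the first rule that matches the flow."""
    src, dst = zone(src_vm), zone(dst_vm)
    for rule_src, rule_dst, action in rules:
        if rule_src in (src, "any") and rule_dst in (dst, "any"):
            return action
    return "drop"

print(evaluate("tenanta-srv1", "tenanta-srv2"))  # permit
print(evaluate("tenanta-srv1", "tenantb-srv1"))  # drop
print(evaluate("client-pc",    "tenantb-srv2"))  # permit
```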

Below are the screenshots of the results after the VSG was configured.

These are the only rules that are required for the communication flows.

image

 

Here is what the port-profile looks like on the Nexus 1000v. Notice the org and vn-service entries. This means that this port profile is VSG aware.

image

 

The ICMP traffic from the Tenant A Servers.

image

 

The ICMP traffic from the Tenant B servers (same result as the Tenant A servers; only one is shown here).

image

 

Finally, the results from the external client:

image

 

As you can see, we achieved our goal with just three filtering rules. Also, we were able to leverage VM name filtering instead of IP filtering, which allowed us to filter within the same subnet without resorting to naming each IP address or using different port numbers. Very cool! The Cisco VSG is capable of many complex configurations combining both networking categories (IP, port number, etc.) and VM categories. This was just a quick example of what can be done. As always, if you have any questions or would like to see a live demo, feel free to contact me.

The iPad is VDI Ready!

This has been a very cool couple of weeks for the VDI landscape with VMware View. The View client for the iPad that was first seen in a demo at VMworld US in 2010 is finally here. Now I know what may have taken them so long.

VMware View 4.6 was also released in the past couple of weeks. With version 4.6 came the ability to use the PCoIP protocol on the VMware View Security Server that sits in your DMZ. This eliminates the need to set up a VPN for the endpoint device to access a desktop pool using the PCoIP protocol from outside your firewall.

I can now see where this functionality would be absolutely necessary to access a View desktop from the iPad. Super-mobile VDI is really cool, but it would have been a drag to only access your desktops over RDP. Also, having to set up a VPN connection from your iPad would go against the ease of use that the iPad offers.

Below is a video demo of the new iPad client. Among some of the coolest features are the virtual laptop track pad and the touch gestures built into the client to take advantage of the iPad functionality.

http://www.youtube.com/watch?v=ldECHtfDyjs

And also some use cases in the field. This one is for Children’s Hospital Central California. I think this is a great use of the technology.

http://www.youtube.com/watch?v=aU0nF_FM–s&feature=related

If you have any questions or would like more information, please contact me. Also, if you would like to see the View Client for iPad in person, we can schedule a demo for you with our VMware View Lab running on our Cisco UCS blade infrastructure.

Sean Crookston Awarded VCAP-DCA

TBL Networks’ Solutions Engineer Sean Crookston recently attained  the title of VMware Certified Advanced Professional in Datacenter Administration (VCAP-DCA). Sean is only the 47th person worldwide to achieve this elite virtualization certification.

The VMware Certified Advanced Professional 4 – Datacenter Administration (VCAP4-DCA) certification is designed for administrators, consultants, and technical support engineers who are capable of working with large, complex virtualized environments and who can demonstrate technical leadership with VMware vSphere technologies.

Sean put in many hours of study and research to reach this achievement. Sean has documented much of this work on his website – www.seancrookston.com.

Congratulations to Sean Crookston, VCAP-DCA #47.

Follow Sean Crookston on Twitter at www.twitter.com/seancrookston

Follow TBL Networks on Twitter at www.twitter.com/tblnetworks

Solving the Virtual Desktop Puzzle Part 3

In this series we’ve already looked at virtual desktop storage efficiency with “linked clones” and user profile management options. In this post we will discuss another piece of the desktop image that can potentially be offloaded to the network: the applications.

Remember that in a virtual desktop environment one of our goals is to make the “gold” master image as vanilla as possible. We do this by moving the unique components of the desktop off of the image and onto the network. VMware has a way to virtualize your applications so that they can be offloaded onto a network share. This means that the applications can be streamed to the user when they log in to their desktop. So, the desktop becomes disposable and the user gets the appropriate applications when they log into any virtual desktop. So how can we do this?

We do this with a VMware product called ThinApp. It even comes bundled with the VMware View Bundled licensing. ThinApp allows us to package an application as a single executable file. All of the DLLs and bits that the application requires at runtime are packaged in this single executable file. So, nothing actually gets installed on the desktop in order to run the application. Once the application is packaged, it can run from the desktop hard drive, an external hard drive, a CD, a DVD, and even from the network. Basically, if you have an operating system and a place to store the packaged ThinApp’ed application, you can run it.

If you run the packaged application from the network, then each user can have the application streamed to their virtual desktop instance when they log in. There is also the added benefit of the packaged applications running on the appropriate storage tier if we are running a tiered storage solution. So, we’ve taken care of the user profiles and applications to make the desktop image as vanilla as possible. Our user profiles and our applications can be centrally managed along with our desktops. We can now treat multiple desktops as a single pooled unit. No more Microsoft patch Tuesday woes, no more uncontrolled virus or spyware outbreaks, and fewer user desk side trips.

First Ask Harley Session Question and Answer Summary

On Thursday, January 13th I had my first Ask Harley Session. This was a session where I answered virtualization and VMware related questions on Twitter. I received a lot of great questions during this session. Thank you to all who participated. Below are the questions and their answers in case you missed them on Twitter.

 

Ask Harley: Question 1 – What common issues or mistakes do you see with your customers who have setup VMware infrastructure or are looking to setup VMware?

 

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

 

Question:

What common issues or mistakes do you see with your customers who have setup VMware infrastructure or are looking to setup VMware?

Answer:

Most of the issues in an initial deployment occur from a lack of capacity, application, and infrastructure planning.

Consider the 4 core (CPU, RAM, DISK, NET) resources from a capacity standpoint. Consider application requirements (MS Clustering, dongles, vendor support, etc.).

Consider scalability and ease of management from the infrastructure standpoint. Infrastructure item examples: scale up vs. scale out (more hosts = more DRS opportunities; fewer hosts = more risk).

Details. Details. Details. Example: Do I have enough space for VMDK and swap files? Do I have a syslog server for ESXi?

Keep it simple. Avoid Resource Pools, Reservations, and Limits unless they are needed.

Resource pools are NOT for organization. That’s worth repeating. Resource pools are NOT for organization. Folders are.

There is more involved in a virtualization design / deployment than clicking next.

 

Ask Harley: Question 2 – Why would you use Virtual Port-ID NLB instead of IP-Hash NLB?

 

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

 

Question:

Why would you use Virtual Port-ID NLB instead of IP-Hash NLB?

Answer:

The summary answer would be simplicity. Port-ID is the default load balancing policy and is good in a wide range of use cases.

Port-ID Advantage: Simple, effective. Port-ID Disadvantage: Only egress traffic is load balanced, as it depends on the source virtual port ID.

IP-Hash has an upstream dependency on 802.3ad static link aggregation; an example is EtherChannel on Cisco switches. Even if the dependency is met, you may not be load balancing as efficiently as you think. You need MANY destinations for IP-Hash to reach maximum effectiveness.

Why? Because the IP-Hash algorithm XORs the least significant byte (LSB) of the source and destination IP addresses. Then, the result is computed modulo the number of physical NICs.

Formula: (“LSB of source IP of VM” xor “LSB of destination IP”) mod “number of physical NICs in the team”. If the result is the same for two VMs, they will choose the same physical NIC for traffic. If the result is different, they will choose different physical NICs.
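
For intuition, here is the same selection logic as a small sketch. It mirrors the formula above (XOR of the least significant bytes, modulo the NIC count); it is not VMware's code, and the IP addresses are arbitrary examples.

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, num_nics: int) -> int:
    """Return the index of the physical NIC chosen for this src/dst pair."""
    src_lsb = int(src_ip.split(".")[-1])   # least significant byte of source IP
    dst_lsb = int(dst_ip.split(".")[-1])   # least significant byte of destination IP
    return (src_lsb ^ dst_lsb) % num_nics

# Two VMs talking to the same destination through a two-NIC team:
print(ip_hash_uplink("10.0.0.10", "10.0.0.50", 2))  # 0 -> first uplink
print(ip_hash_uplink("10.0.0.11", "10.0.0.50", 2))  # 1 -> second uplink
# With only a few destinations, many pairs can still hash to the same
# uplink, which is why MANY destinations are needed for good balancing.
```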

IP-Hash Advantage: Ingress and Egress load balancing. IP-Hash Disadvantage: Upstream dependencies. More complexity and planning involved.

For further detail beyond Twitter see Ken Cline’s excellent post on Network Load Balancing here -> http://bit.ly/e7eVK0

 

Ask Harley: Question 3 – What are some of the most difficult parts of the journey to becoming a VCDX?

 

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

What are some of the most difficult parts of the journey to becoming a VCDX?

Answer:

I can only speak from my experience on the VCDX journey.

If you are well prepared and study, the written portions of the tests (and to some extent the lab), while challenging, are nothing compared to the application and defense.

The application and design submission itself requires a significant amount of work.

Whatever you calculate the work effort to be, you may want to double or quadruple it.

I spent about four weeks’ worth of man-hours on my application and design.

Make sure you meet the application requirements in your design documentation and then go beyond. Leave nothing blank.

Know your design for the defense. This is worth repeating: know your design cold for the defense. Nothing can prepare you for the defense other than knowing your design and significant field experience.

You don’t know what the panelists will throw at you, so you must have a breadth of knowledge.

By far the most challenging of all may be getting a handle on your nerves during the panel defense.

My detailed experience is here -> http://goo.gl/Y0tPD

There is also a nice roundup of experiences here -> http://bit.ly/eeycor

 

Ask Harley: Question 4 – Are there significant performance advantages on ESXi using Dell Equalogic MPIO drivers over native VMware Round Robin drivers?

 

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

Are there significant performance advantages on ESXi using Dell Equalogic MPIO drivers over native VMware Round Robin drivers?

Answer:

I have not tested the performance of the Dell Equalogic MPIO drivers and a quick search did not net any official benchmarks. In general, a properly implemented and tested third party MPIO solution like the Equalogic MEM or Powerpath VE should be better at making path selections.

The storage vendor should be more familiar with its array than VMware. I have experience with Powerpath VE and the installation is fairly easy. Once it is installed there is very little management besides just updating the software from time to time.

Any other third-party plugin should have a similar ease of use and management story. Consult the vendor.

I did find one unofficial performance testing post here -> http://bit.ly/hkbFv8

 

Ask Harley: Question 5 – What about using multiple vendor MPIO drivers, have you ever experienced issues in a mixed environment?

 

 

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

 

Question:

What about using multiple vendor MPIO drivers, have you ever experienced issues in a mixed environment?

 

Answer:

 

I have not tested using multiple MPIO drivers.

However, I would not recommend that scenario as a host can only have one path selection policy.

If you have multiple hosts using different path selection policies, then performance or availability can be impacted. You should always use the same path selection policy for all of the hosts in a cluster to avoid path confusion.

Consistency is the key in a virtual infrastructure.

 

Ask Harley: Question 6: With Cisco load balancers in place, do I still specify direct URLs to the two security servers for #VMwareView or use the LB URL?

NOTE: This answer was originally provided over a series of Tweets by Harley Stagner on 1/13/11 at TBL Networks’ Twitter site as part of our “Ask Harley” series.

Question:

With Cisco load balancers in place, do I still specify direct URLs to the two security servers for #VMwareView or use the LB URL?

Answer:

With Cisco load balancers in front of the View security servers, you would specify the load balancer URL.

Solving the Virtual Desktop Puzzle Part 2

In part 1 of this series, we explored the possibilities that VMware View’s linked clones technology unlocks. We can begin to move closer to deploying a single “gold” image with this technology and managing only that “gold” image. That is a very powerful prospect. However, if we truly want to get to that state, some other items in the image need to be offloaded. This post will discuss strategies to offload the user data from the virtual desktop images.

First, let’s define what typically can be found as part of the user data.

  • My documents
  • Desktop
  • Application Data
  • Shortcuts
  • Basically any “user” customization data that makes that desktop unique to the user

If the user data is part of the virtual desktop image, then the virtual desktop is not disposable (from the point of view of the user, at least 🙂 ). We need to store the user data somewhere else if we do not want to lose it when the virtual desktop is refreshed, recomposed, or provisioned again. There are several ways to tackle this particular design consideration. Let’s go over a few of them.

First, the built in Windows methods.

Roaming Profiles

Pros:

  • Built in to Windows
  • Well understood
  • Capable of offloading the entire user profile, including files for third party applications (e.g. Favorites for third party browsers like Firefox.)

Cons:

  • Downloads the entire user profile every time a user logs on
  • Large profiles can cause very long logon times for users
  • The virtual disk on the virtual desktop image will grow with the profile data every time a user logs on
  • Cannot really be monitored for consistency or functionality
  • May be problematic when upgrading from an older Operating System (like Windows XP) to a new Operating System (like Windows 7) due to profile incompatibilities.

Even though it is the first listed, I would actually recommend roaming profiles as a last resort. Long time Windows administrators know the frustrations of roaming profiles. Dealing with roaming profile problems may lessen the operational efficiencies gained by deploying a virtual desktop environment in the first place.

Folder Redirection

Pros:

  • Built in to Windows
  • Well understood
  • Folders redirected truly reside completely off of the virtual desktop image
  • Logon times are not an issue like they can be with roaming profiles

Cons:

  • Does not take care of the entire user profile. Third party application customizations (like Favorites for third party browsers like Firefox) may or may not be redirected depending on where that data is stored.
  • Cannot really be monitored for consistency or functionality
  • May be problematic when upgrading from an older Operating System (like Windows XP) to a new Operating System (like Windows 7) due to folder differences.

I have used Folder Redirection many times in different environments. When set up properly it works reasonably well. My wish list for improvement would be the ability to audit when a user does not have their folder redirected to avoid any user data loss.

Outside of built in Windows solutions, there are several third party solutions that are trying to tackle the “user identity” offloading consideration. These solutions vary in functionality, complexity, and price. So, I will just list the general Pros and Cons with this category of software solution.

Third Party Profile Management

Pros:

  • Profile management is what it does. It had better be good at it 🙂
  • May have more robust monitoring of the user data for consistency and functionality
  • May have the ability to seamlessly migrate user data from an older Operating System (like Windows XP) to a newer Operating System (like Windows 7)
  • Can be a more robust profile management solution vs. built in Windows tools
  • Will likely scale more efficiently than built in Windows tools

Cons:

  • May add more complexity
  • Added price
  • Not all profile management is created equal; research must be done to ensure that the solution fits the needs of your environment. (At least with Roaming Profiles and Folder Redirection you know exactly what you are getting and not getting.)

As you can see, we must offload user data if the virtual desktop environment is going to be as efficient as possible. Fortunately, there are many ways to accomplish this goal. Part 3 of this series will go over offloading the applications from the virtual desktop image. Until then, if you have any comments or questions feel free to post them.