
A Structured Virtual Infrastructure Approach Part III: Compute Platform Software

By admin | Jun 27, 2011 | Insights

In Part II of the Structured Virtual Infrastructure Approach series, we explored the Cisco Unified Computing System (UCS) hardware. This post explores the UCS management software, UCS Manager, which is included with the 6100 series fabric interconnects. A single instance of UCS Manager can manage up to 20 chassis, and all of the blades in the infrastructure are managed through this one interface. Below, we'll discuss some of the features that make this interface unique among compute platform management interfaces.

Complete Compute Infrastructure Management

  • All the chassis / blades (up to 20 chassis worth) are managed in this single interface.
  • The management is not “per chassis” like legacy blade systems.
  • Consolidated management means efficient administration for the entire compute platform; the sketch below shows what a single management endpoint looks like in practice.
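
To make the single-interface point concrete, here is a minimal sketch that pulls the chassis and blade inventory through the UCS Manager XML API in Python. The hostname, credentials, and certificate handling are placeholder assumptions for illustration, not a production pattern.

    import requests
    import xml.etree.ElementTree as ET

    UCSM = "https://ucsm.example.com/nuova"  # hypothetical UCS Manager address

    # Log in once; this single session reaches every chassis and blade
    # behind the fabric interconnect pair.
    resp = requests.post(
        UCSM,
        data='<aaaLogin inName="admin" inPassword="password"/>',
        verify=False,  # assumes a lab setup with a self-signed certificate
    )
    cookie = ET.fromstring(resp.text).get("outCookie")

    # One configResolveClass call per class returns the full inventory;
    # no per-chassis connection or login is required.
    for class_id in ("equipmentChassis", "computeBlade"):
        query = ('<configResolveClass cookie="%s" classId="%s" '
                 'inHierarchical="false"/>' % (cookie, class_id))
        out = ET.fromstring(requests.post(UCSM, data=query, verify=False).text)
        for mo in out.iter(class_id):
            print(class_id, mo.get("dn"))

    # Clean up the session.
    requests.post(UCSM, data='<aaaLogout inCookie="%s"/>' % cookie, verify=False)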

Service Profiles

  • All of the items that make a single server (blade) unique are abstracted with a service profile.
  • This may include WWNs, MACs, BIOS settings, boot order, firmware revisions, etc.
  • WWNs and MACs are pulled from pools that the administrator defines.
  • Even the KVM management IPs are pulled from a pool, so the administrator does not have to manage those IPs at all.
  • You can create a Service Profile template with all of these characteristics and create Service Profiles from the template.
  • When you need to deploy a new blade, all of the unique adjustments are already completed by the Service Profile template; a sketch of template-based deployment appears after the diagram below.
  • With the M81KR Virtual Interface Card (VIC) the number of interfaces assigned to a blade can be defined in the Service Profile template.
  • Even though a single mezzanine card in a blade has only (2) 10Gb physical ports, the M81KR VIC allows you to define up to 56 virtual FC/Ethernet interfaces. This allows for more familiar vSphere networking setups like the one below:

[Figure: vSphere network design using multiple dedicated vNICs presented by the M81KR VIC, as used with the Cisco Nexus 1000v]

The diagram above shows a setup that can be used with the Cisco Nexus 1000v. It would be impossible to build this setup on the UCS B-Series without the M81KR VIC. We'll explore why a networking setup like this may be necessary when we get to the vSphere-specific posts in this series.
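
The template workflow described above maps to a single XML API method, lsInstantiateNNamedTemplate, which stamps out named service profiles from an existing template. Below is a rough sketch under the same assumptions as the earlier example; the template DN, organization, and profile names are hypothetical, and the cookie comes from an aaaLogin call like the one shown before.

    import requests

    UCSM = "https://ucsm.example.com/nuova"  # hypothetical UCS Manager address

    def deploy_from_template(cookie, template_dn, new_names, target_org="org-root"):
        # Each new profile draws its WWNs, MACs, and management IP from the
        # pools referenced by the template, so no per-blade identity work
        # is needed.
        name_set = "".join('<dn value="%s"/>' % n for n in new_names)
        query = ('<lsInstantiateNNamedTemplate cookie="%s" dn="%s" '
                 'inTargetOrg="%s" inErrorOnExisting="true">'
                 '<inNameSet>%s</inNameSet>'
                 '</lsInstantiateNNamedTemplate>'
                 % (cookie, template_dn, target_org, name_set))
        return requests.post(UCSM, data=query, verify=False).text

    # Example: two new blades get fully formed identities in one call.
    # deploy_from_template(cookie, "org-root/ls-esx-template", ["esx-09", "esx-10"])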

Role Based Access Control

  • Even though the components are converging (storage, compute, networking) the different teams responsible for those components can still maintain access control for their particular area of responsibility.
  • UCS Manager has permissions that can be applied in such a way that each team only has access to the administrative tab(s) they are responsible for.
  • Network team –> Network Tab, Server Team –> Server Tab, Storage Team –> Storage Tab.
  • UCS Manager also supports several different authentication methods, including local and LDAP-based authentication; the sketch below lists the built-in roles these permissions map to.
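
For a feel of how this is modeled, the roles can be enumerated over the same XML API session as the earlier examples. The aaaRole class ID is part of the UCS management information tree; the attribute names in the sketch reflect my reading of the model and should be verified against your UCS Manager version.

    import requests
    import xml.etree.ElementTree as ET

    UCSM = "https://ucsm.example.com/nuova"  # hypothetical UCS Manager address

    def list_roles(cookie):
        # Resolve every role object; each carries the privileges that gate
        # access to the Network, Server, and Storage areas of the GUI.
        query = ('<configResolveClass cookie="%s" classId="aaaRole" '
                 'inHierarchical="false"/>' % cookie)
        out = ET.fromstring(requests.post(UCSM, data=query, verify=False).text)
        for role in out.iter("aaaRole"):
            # "priv" is assumed to hold the comma-separated privilege list.
            print(role.get("name"), "->", role.get("priv"))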

What vSphere does for operating system instances, UCS does for the blade hardware: it abstracts the unique configuration items into a higher software layer so they can be managed more easily from a single location. The next post in this series will take a look at some storage platform hardware. It's not just about carving out disks for the virtual infrastructure any longer; we'll look at some options that integrate well with a modern vSphere infrastructure.
