
Running a Lean Branch Office with the Cisco UCS Express

Centralized management brings organizations more control over resources with fewer equipment assets in the field. There are many cases, though, where equipment may be needed in a branch office to speed access to a resource or to eliminate the dependency on a network link to the central datacenter. It is very common to see at least one, if not multiple, servers at the branch office providing file/print services or user authentication. Perhaps the servers are providing some service that is specialized to a particular business (banking applications come to mind here). Whatever service is being provided, sometimes it is better to maintain local access at the branch. So there are servers to maintain at the branch office, as well as networking gear and other such devices.

What if you could consolidate your branch office services with your router? That is exactly what the Cisco UCS Express is meant to do. The UCS Express is a Services-Ready Engine (SRE) module that works in Integrated Services Router Generation 2 (ISR G2) routers. This module is a server that you can run VMware ESXi on to provide branch office services. Here is an example of an ISR G2 device:

 

Cisco UCS Express ISR G2 port schematics

 

The slots you see at the bottom of the device are where the SRE UCS Express modules are located. A UCS Express module is shown below.

 

Cisco UCS Express main schematics

 

Here are a couple of the highlights of this architecture:

  • One or two 500 GB hot-swap hard drive options
  • Single-core or dual-core CPU options
  • 4 GB or 8 GB of RAM
  • iSCSI initiator hardware offload if you need to connect to an external iSCSI device
  • Direct SRE-to-LAN connectivity, which reduces cabling
  • Maintenance is covered under SMARTnet

This architecture provides all that a branch office may need by virtualizing several branch office services onto the SRE UCS Express Module. The ESXi instance can be managed centrally by your existing vCenter installation. This gives you the benefits of local service access and centralized management while reducing the equipment needs at the branch office. Pretty slick.
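
If you already manage vSphere programmatically, a branch host on UCS Express is no different from any other host in the inventory. Here is a minimal sketch, assuming the pyVmomi library and hypothetical vCenter hostnames and credentials (none of which come from the product itself), that lists every ESXi host the central vCenter sees, branch or datacenter:

```python
# A minimal sketch, assuming the pyVmomi library; hostnames and credentials
# are hypothetical placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate validation for self-signed vCenter certificates.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk the inventory; branch UCS Express hosts show up alongside the
    # datacenter hosts in the same vCenter view.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.summary.runtime.connectionState)
finally:
    Disconnect(si)
```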

If you would like to discuss how this architecture might be able to help your organization or want further technical details, please feel free to contact me.

A Structured Virtual Infrastructure Approach Part III: Compute Platform Software

In Part II of the Structured Virtual Infrastructure Approach Series, we explored the Cisco Unified Computing System (UCS) hardware. This post will explore the UCS management software. Up to 20 chassis can be managed with a single instance of the UCS Manager. The UCS Manager is included with the 6100 series fabric interconnects. All of the blades in the infrastructure can be managed through this single interface. Below, we’ll discuss some of the features that make this interface unique among compute platform management interfaces.

Complete Compute Infrastructure Management

  • All the chassis / blades (up to 20 chassis worth) are managed in this single interface.
  • The management is not “per chassis” like legacy blade systems.
  • Consolidated management means efficient management for the entire compute platform.
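
To give a feel for what this single point of management means in practice, here is a minimal sketch assuming the Cisco UCS Manager Python SDK (ucsmsdk); the address and credentials are placeholders, and the query shown is an illustration rather than anything from the original post:

```python
# A minimal sketch, assuming the Cisco UCS Manager Python SDK (ucsmsdk);
# the address, credentials, and printed fields are illustrative only.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()
try:
    # A single query against one UCS Manager returns blades from every
    # chassis behind the fabric interconnect pair; no per-chassis logins.
    blades = handle.query_classid("ComputeBlade")
    for blade in blades:
        print(blade.dn, blade.serial)
finally:
    handle.logout()
```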

Service Profiles

  • All of the items that make a single server (blade) unique are abstracted with a service profile.
  • These may include WWNs, MACs, BIOS settings, boot order, firmware revisions, etc.
  • WWNs and MACs are pulled from pools that you define.
  • Even the KVM management IPs are pulled from a pool, so the administrator does not have to manage those IPs at all.
  • You can create a Service Profile template with all of these characteristics and create Service Profiles from the template.
  • When you need to deploy a new blade, all of the unique adjustments are already handled by the Service Profile template.
  • With the M81KR Virtual Interface Card (VIC) the number of interfaces assigned to a blade can be defined in the Service Profile template.
  • Even though a single mezzanine card in a blade will only have (2) 10Gb ports, the M81KR VIC allows you to define up to 56 FC/Ethernet ports. This allows for more familiar vSphere networking setups like the one below:

[Diagram: vSphere networking setup for use with the Cisco Nexus 1000v]

The diagram above is a setup that can be used with the Cisco Nexus 1000v. It would be impossible to do this setup on the UCS B-Series without the M81KR VIC. We’ll explore why a networking setup like this may be necessary when we get to the vSphere specific posts in this series.
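
To make the pool-and-template idea from the Service Profile list concrete, here is a purely conceptual Python sketch. It mimics the behavior described above (identity values drawn from a pool, profiles stamped from a template); it is not the actual UCS Manager object model or API:

```python
# Conceptual sketch only: models the pool-and-template behavior described
# above, not the actual UCS Manager object model or API.
from dataclasses import dataclass
from itertools import count


class MacPool:
    """Hands out addresses from a defined block, like a UCS MAC/WWN pool."""

    def __init__(self, prefix="00:25:B5:00:00"):
        self._prefix = prefix
        self._next = count(1)

    def allocate(self):
        return f"{self._prefix}:{next(self._next):02X}"


@dataclass
class ServiceProfileTemplate:
    """The settings that make a blade unique, defined once."""

    boot_order: list
    bios_settings: dict
    firmware_package: str
    vnic_count: int

    def instantiate(self, name, mac_pool):
        # Identity values are pulled from the pool automatically, so deploying
        # a new blade is just stamping another profile from the template.
        macs = [mac_pool.allocate() for _ in range(self.vnic_count)]
        return {"name": name, "macs": macs, "boot_order": self.boot_order,
                "bios": self.bios_settings, "firmware": self.firmware_package}


pool = MacPool()
template = ServiceProfileTemplate(
    boot_order=["san", "lan"],
    bios_settings={"turbo": True, "vt": True},
    firmware_package="1.4(1)",
    vnic_count=4,
)

for i in range(1, 4):
    profile = template.instantiate(f"esx-blade-{i:02d}", pool)
    print(profile["name"], profile["macs"])
```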

Role Based Access Control

  • Even though the components are converging (storage, compute, networking), the teams responsible for those components can still maintain access control over their particular areas of responsibility.
  • The UCS Manager has permissions that can be applied in such a way that each team only has access to the administrative tab(s) they are responsible for.
  • Network Team –> Network Tab, Server Team –> Server Tab, Storage Team –> Storage Tab.
  • The UCS Manager also supports several different authentication methods, including local and LDAP-based authentication.
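
As a rough conceptual illustration (not the actual UCS Manager role or locale definitions), the tab-level separation described above boils down to a mapping like this:

```python
# Conceptual sketch only: an illustrative role-to-tab mapping, not the actual
# UCS Manager role definitions.
ROLE_TABS = {
    "network-team": {"Network"},
    "server-team": {"Servers"},
    "storage-team": {"Storage"},
    "ucs-admin": {"Network", "Servers", "Storage", "Admin"},
}


def can_access(role, tab):
    """Return True if the given role may work in the given UCS Manager tab."""
    return tab in ROLE_TABS.get(role, set())


print(can_access("network-team", "Network"))  # True
print(can_access("network-team", "Servers"))  # False
```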

What vSphere does for operating system instances, the UCS does for the blade hardware. It abstracts the unique configuration items into a higher software layer so that they can be managed more easily from a single location. The next post in this series will take a look at some storage platform hardware. It’s not just about carving out disks for the virtual infrastructure any longer. We’ll take a look at some options that integrate well with a modern vSphere infrastructure.

A Structured Virtual Infrastructure Part II: Compute Platform Hardware

In Part I of this series, I discussed some design options for a virtual infrastructure (Traditional Rackmount, Converged Rackmount, and Converged Blade). Using the Converged Blade option as the model going forward, we’ll explore the individual components of this solution in more detail. This post will explore the Compute Platform (UCS B-Series) in more detail.

 


Let’s start with the “brains” of the UCS B-Series, the 6100 series fabric interconnects.

6100 Series Fabric Interconnects:

Interconnect / Module Options:

  • 6120XP – (20) 10GbE and FCoE capable SFP+ port Fabric Interconnect with a single expansion module slot
  • 6140XP – (40) 10GbE and FCoE capable SFP+ port Fabric Interconnect with two expansion module slots

Expansion Module Options:

  • 10Gbps SFP+ – (6) ports
  • 10Gbps SFP+ – (4) ports, 1/2/4 Gbps Native Fibre Channel SFP+ – (4) ports
  • 1/2/4 Gbps Native Fibre Channel SFP+ – (8) ports
  • 2/4/8 Gbps Native Fibre Channel SFP+ – (6) ports

Below is a diagram of the UCS 6120XP labeled with the different ports:

[Diagram: UCS 6120XP with ports labeled]

A redundant UCS system consists of two 6100 series devices. They are connected via the cluster ports to act in unison. If a port or an entire 6100 series device were to fail, the other fabric interconnect would take over.

The 6100 series fabric interconnects provide 10Gb connectivity to the UCS 5108 via a module called the 2104XP fabric extender. A single pair of 6100 series fabric interconnects can manage up to twenty chassis depending on your bandwidth needs per chassis.

5108 Chassis (8 blade slots per chassis):

[Diagram: UCS 5108 chassis with two 2104XP Fabric Extenders]

As you can see from the diagram, there are two 2104XP Fabric Extenders per chassis. Each 2104XP has four 10Gb ports, for a total of up to 80Gbps of throughput per chassis. So there is plenty of bandwidth, with the added benefit of fewer cables and, consequently, easier cable management. The only cables that will ever be needed for the back of the 5108 chassis are up to eight cables for connectivity and up to four cables for power.
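
For anyone who wants to play with those numbers, here is a quick arithmetic sketch. It is not a Cisco sizing tool, and it assumes the 6120XP's fixed ports are dedicated to chassis connectivity, with LAN/SAN uplinks living on the expansion module:

```python
# A quick arithmetic sketch of the numbers above; not a Cisco sizing tool.
# Assumes the 6120XP's 20 fixed ports are dedicated to chassis connectivity.

FEX_PORTS = 4        # 10Gb uplink ports per 2104XP fabric extender
FEX_PER_CHASSIS = 2  # two fabric extenders per 5108 chassis
PORT_GBPS = 10


def chassis_uplink_bandwidth(uplinks_per_fex):
    """Total uplink bandwidth for one chassis across both fabric extenders."""
    assert 1 <= uplinks_per_fex <= FEX_PORTS
    return uplinks_per_fex * FEX_PER_CHASSIS * PORT_GBPS


def max_chassis(fixed_ports_per_fi, uplinks_per_fex):
    """How many chassis a fabric interconnect pair can attach at that ratio."""
    return fixed_ports_per_fi // uplinks_per_fex


for uplinks in (1, 2, 4):
    print(f"{uplinks} uplink(s) per FEX: "
          f"{chassis_uplink_bandwidth(uplinks)} Gbps per chassis, "
          f"up to {max_chassis(20, uplinks)} chassis on a 6120XP")
```

With one uplink per fabric extender you land on the twenty-chassis figure quoted above; with all four you get the full 80Gbps per chassis but fewer chassis per fabric interconnect pair.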

Since the bandwidth needed for the external network is calculated at the Fabric Interconnect level, all that is needed at that point is to calculate the computing needs for the workload (CPU and RAM). This is where the blades themselves come in.

Blade Options:

  • Full-Width Blades (B250 M1, B440 M1) take up two chassis slots each
  • Half-Width Blades (B200 M2, B230 M1) take up one chassis slot each
  • You can mix blade types in the same chassis, up to the eight available chassis slots
  • The blade processor configurations vary:
  • The B2xx blades have either 4, 6, or 8 core processors in a dual socket configuration
  • The B440 M1 can hold up to (4) sockets of the Intel 7500 series 8 core processors
  • The B250 M1 holds up to 384GB of RAM in a full-width form factor
  • The B200 M2 holds up to 96GB of RAM in a half-width form factor
  • The B230 M1 holds up to 256GB of RAM in a half-width form factor
  • The full-width blades can hold up to (2) mezzanine cards. Each card has (2) 10Gb ports, for 20Gbps of connectivity per card.
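
To connect this list back to the workload sizing mentioned above, here is a back-of-the-envelope sketch. Only the 96 GB RAM figure is taken from the list; the workload numbers, core count, and vCPU-per-core ratio are hypothetical placeholders:

```python
# A back-of-the-envelope sizing sketch. The 96 GB figure comes from the
# B200 M2 entry above; everything else here is a hypothetical placeholder.
import math


def blades_needed(workload_vcpus, workload_ram_gb,
                  cores_per_blade, ram_per_blade_gb, vcpus_per_core=4):
    """Return the blade count that covers whichever dimension binds first."""
    by_cpu = math.ceil(workload_vcpus / (cores_per_blade * vcpus_per_core))
    by_ram = math.ceil(workload_ram_gb / ram_per_blade_gb)
    return max(by_cpu, by_ram)


# Hypothetical example: 400 vCPUs and 1.5 TB of RAM on half-width B200 M2
# blades (dual socket x 6 cores assumed, 96 GB of RAM each).
print(blades_needed(400, 1536, cores_per_blade=12, ram_per_blade_gb=96))
```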

M81KR Virtual Interface Card:

The M81KR Virtual Interface Card deserves a special mention. This mezzanine card is capable of dividing its (2) 10Gbps ports into a combination of up to 56 virtual Ethernet and Fibre Channel ports. This way you can manage port isolation and QoS for your blades the way you may be used to in a traditional rackmount virtual infrastructure. As these posts continue, we will explore why this functionality may be needed for the virtual infrastructure when using the Converged Blade Infrastructure Model.
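
As a purely conceptual sketch of what that headroom buys you, here is a check of a hypothetical per-blade vNIC/vHBA layout (a typical redundant management, vMotion, and VM-traffic design) against the 56-interface ceiling; the interface names are illustrative, not a Cisco configuration:

```python
# Conceptual sketch only: checking a planned per-blade vNIC/vHBA layout
# against the M81KR's 56-interface ceiling. Names are illustrative.
M81KR_MAX_VIFS = 56

planned_interfaces = [
    ("vHBA-fabric-A", "fc"), ("vHBA-fabric-B", "fc"),
    ("vmnic-mgmt-A", "eth"), ("vmnic-mgmt-B", "eth"),
    ("vmnic-vmotion-A", "eth"), ("vmnic-vmotion-B", "eth"),
    ("vmnic-vmdata-A", "eth"), ("vmnic-vmdata-B", "eth"),
    ("vmnic-n1kv-ctrl-A", "eth"), ("vmnic-n1kv-ctrl-B", "eth"),
]

used = len(planned_interfaces)
assert used <= M81KR_MAX_VIFS, "layout exceeds what the VIC can present"
print(f"{used} of {M81KR_MAX_VIFS} virtual interfaces defined "
      f"({M81KR_MAX_VIFS - used} to spare)")
```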

This post explored some of the Compute Platform Hardware components. The next post in this series will explore some of the software components and management that make the UCS compute platform ideal for a structured virtual infrastructure that can scale incrementally and be managed easily.