
Limelight Support

Why keep scrambling to work through technology outages when TBL Networks can help prevent them from happening in the first place? Support shouldn’t just be reactive – at TBL, we believe it should be preventative. We maintain your network for you and apply critical patches as needed. We measure our success not by how fast we react to a problem, but by how many problems we avoid by properly managing your system.

Have Some Peace of Mind

In the event a problem does arise, you’ll have peace of mind knowing expert engineering resources are at your disposal all day, every day. As a TBL Networks customer, you have priority access to our engineers via a dedicated phone number, email address, and web portal. While most support organizations’ clocks start when someone gets around to opening a ticket, our SLA begins the moment contact is made. A member of the TBL support team establishes remote access and is actively working towards resolution within one hour.

Keep a Pulse on Your System’s Health

TBL’s Limelight Support is priced at a fixed monthly rate that delivers proactive monitoring, support services and configuration backup. You’ll even be able to keep a pulse on how your system is performing. We provide an onsite monitoring appliance that collects detailed device vital signs. Your system’s real-time health and historical trending are displayed on a dashboard that you can view in any browser or through our iPad App. We’ll offer you even more proof of your system’s vigor by providing quarterly business reviews and executive reports.

TBL, hands down, has some of the best engineers available anywhere in the world.

Global Corporation, IT Analyst

Be Strategic With Your Resources

We want to help you with a solution that integrates seamlessly with your staff and processes. That’s why we free up your resources so they can focus on the tasks that grow your business. Let us drive the operational lifecycle, bolstering your IT staff’s capacity to deploy the next big thing. Be strategic with your resources – leverage TBL’s Limelight Support.

Download the PDF

Contract Lifecycle Management

Every manufacturer has them: maintenance contracts. They provide the ongoing support and advanced hardware replacement that every IT department needs, and they are a necessary component of any organization’s business continuance. But along with all the benefits come plenty of hassles for most companies.

Avoid Complexity

Ideally, organizations would have a minimum number of contracts from each manufacturer, with all products sharing a common renewal date. However, the dedication needed to achieve and maintain this Zen-like state of contract renewal is not prevalent among resellers. In fact, TBL has found it common for organizations to have a separate contract number for every new purchase, with individual one-year periods of coverage starting from each purchase date. At best, this results in frequent renewal requests from various resellers multiple times a year, which makes budgeting cumbersome. At worst, this continual cycle of multiple purchases and multiple renewals ultimately leads to lapses in support for critical devices and needless coverage for decommissioned assets.

I would recommend TBL to anyone in need of their services.

Financial Corporation, IT Director

Vanguards of Your Best Interest

At TBL, we are stewards of the contracts our clients entrust us to manage on their behalf. A contract remediation project begins with TBL inventorying all current products in production and reconciling them against the client’s current contract records. A plan is then developed to consolidate duplicate contracts into a single master contract per type, with all items across all contracts synchronized to co-terminate coverage. TBL tracks every item on every contract via our proprietary salesforce.com instance, delivering this data directly to our Customer Portal.

Download the PDF

Limelight WiFi

Connectivity Without Complexity

Wireless connectivity is essential in today’s mobile environment. Employees no longer want to be tethered to their desks; they want the freedom to roam and collaborate throughout the office. Enterprise wireless networks are complex to configure and maintain, requiring mastery of an alphabet soup of terminology. Take your wireless network to the cloud, and enjoy connectivity without the hassle.

Zero-Touch Deployment

With TBL’s Limelight WiFi, equipment will be shipped directly to your location without the need for additional programming. Once connected to the network, the access point calls home to obtain its configuration. All intelligence of the wireless network is stored securely in the cloud. Deploy dependable wireless access as fast as you can plug in the cable.
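Conceptually, that call-home step is a device asking a cloud controller for its own configuration and retrying until the network is reachable. The Python sketch below is purely illustrative – the endpoint, serial number, and payload fields are invented for the example, not Limelight WiFi’s actual provisioning protocol:

    import json
    import time
    import urllib.request

    PROVISIONING_URL = "https://cloud.example.com/provision"  # hypothetical cloud endpoint
    SERIAL_NUMBER = "AP-0123456789"                           # identity burned into the access point

    def call_home():
        """Ask the cloud controller for this access point's configuration."""
        request = urllib.request.Request(
            PROVISIONING_URL,
            data=json.dumps({"serial": SERIAL_NUMBER}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        while True:
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    return json.load(response)   # SSIDs, VLANs, radio settings, etc.
            except OSError:
                time.sleep(30)                   # no connectivity yet; keep trying

    config = call_home()
    print("SSIDs to broadcast:", [ssid["name"] for ssid in config.get("ssids", [])])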

Surviving the iPad Crush

As more wireless-only devices connect, the requirements for a stable, high-performing wireless network keep increasing. Limelight WiFi leverages the latest wireless technologies to perform at maximum data rates. An embedded stateful firewall segregates and restricts network access from unauthorized devices. Corporate-sponsored Android and Apple iOS devices can even be controlled via a built-in Mobile Device Manager to ensure they stay in compliance with corporate policy.

Everything worked perfectly after the move. I commend the team.

Higher Education, Director of IT

Guest Access Without the Guessing

Provide secure guest access with customized landing pages and gain insight into valuable customer analytics. Know who your guests or customers are, where they go, and how often they come back. Allow guests to gain internet access by checking in on your corporate Facebook page or by simply accepting your acceptable use policy. Eliminate the ever-changing guest password printed on the whiteboard in every room of the building.

Download the PDF

TBL Networks Announces Addition of Director of Business Transformation

RICHMOND, Virginia – March 18, 2015 – TBL Networks, a Cisco Gold Certified Partner headquartered in Central Virginia, announced the addition of Phil Stull to its team as the company’s Director of Business Transformation. With nearly two decades of focused data center experience, Stull will aid in the growth of TBL Networks and the adoption of its Limelight service offerings.

“Many businesses come to us looking for ways to strategically leverage technology. Phil’s experience and knowledge cover both sides of that equation. He’s business-minded, but also has twenty years of technology experience under his belt,” said Alan Sears, President and CEO of TBL Networks.

As the Director of Business Transformation, Stull is TBL’s technology evangelist. He serves as an advisor for local professionals who are interested in learning how TBL’s Limelight PCaaS (Private Cloud as a Service) can be directly tied to their business drivers.

“Business and technology are no longer two separate entities. They need to be purposefully and thoughtfully linked together in order to achieve high level goals. Phil offers our clients immense insight in both of these areas,” added Sears. “He understands how they work and, more importantly, how they can work together.”

Before coming to TBL Networks, Stull worked at Capital One’s IT headquarters, where he was praised as a technology leader for his work trimming IT expenses and driving innovation. Stull has a proven ability to implement technologies that result in competitive business advantages.

Phil Stull, Director of Business Transformation

About TBL Networks, Inc.
TBL Networks is about moving forward with innovative technology. TBL empowers clients’ collaboration strategy, virtualization, and datacenters to do more with less. TBL delivers these advanced solutions directly where it counts the most – the desktop. Building secure and reliable solutions that introduce efficiencies in human interaction is how we see the future. For more information about TBL Networks, visit www.theblinkylight.com.

Follow @TBLNetworks

Become a fan of TBL on Facebook: https://www.facebook.com/tblnetworks
Follow TBL on LinkedIn: https://www.linkedin.com/company/tbl-networks
Read our Blog: http://tblnetworks.wpengine.com/opinions/

TBL Contact:

Alan Sears, President + CEO, asears@tblnetworks.com, 804-822-3641

CJ’s Thumbs Up Foundation: An Exclusive Club – Guest Blog Post

TBL Networks is proud to present guest blogger Rachel Reynolds, Executive Director at CJSTUF.

Many of you belong to clubs. Some of those clubs were easy to join. You just had to sign up, pay some small annual dues, and enjoy the benefits of membership. Other clubs are more exclusive: the country club, the garden club, fraternal organizations.  All of these clubs carry with them a common cause, a sense of belonging, and a certain set of privileges for membership.

On January 20, 2009, our family joined an exclusive club in which nobody wants membership.  That was the day our daughter Charlotte was diagnosed with cancer. We became one of the many families affected by chronic and life threatening illness.

Each school day, 47 families nationwide join this club when a child is diagnosed with cancer.  Each day, up to 75 children seek treatment on the 7th floor of Children’s Hospital of Richmond at MCV. Many of these children are frequent flyers, making multiple visits every year and requiring care for illnesses such as cystic fibrosis, sickle cell anemia, tumors, leukemia, other types of cancer, and complications due to HIV.

The dues in this club are steep.  They include draining of emotional energy, sleeplessness, medical bills, financial hardship due to job loss or a leave of absence from work, and excessive weight gain (or loss).  Sometimes, you pay the ultimate price: the life of your child.

At CJ’s Thumbs Up Foundation, our primary goal is to help the families who have joined this club.  In a way, we help pay their dues.  In our first two years, we provided over $30,000 in financial assistance to over 60 families.  This year, we are poised to do the same.  Unfortunately, membership in the club continues to grow. CJSTUF currently has a waiting list of families requesting financial assistance and we do not expect this demand to wane.

When individuals and businesses like TBL Networks support CJSTUF with their financial contribution, they help families pay their dues to a club they never wanted to join.  With your contribution, you help Roger and me pay forward the kindness that we received during our darkest hours in 2009.  With your contribution, you can help a family in your own community in their time of need.

If you would like to know more about our organization, visit our website for information on how to get involved.  You can also follow us on Twitter and Facebook or hook into our blog for the latest updates.

Stretched Clusters: Use Cases and Challenges Part I – HA

I have been hearing a lot of interest from my clients lately about stretched vSphere clusters. I can certainly see the appeal from a simplicity standpoint, at least on the surface. Let’s take a look at the perceived benefits, risks, and the reality of stretched vSphere clusters today.

First, let’s define what I mean by a stretched vSphere cluster. I am talking about a vSphere (HA/DRS) cluster where some hosts exist in one physical datacenter and some hosts exist in another physical datacenter. These datacenters can be geographically separated or even on the same campus. Some of the challenges will be the same regardless of the geographic location.

To keep things simple, let’s look at a scenario where the cluster is stretched across two different datacenters on the same campus. This is a scenario that I see attempted quite often.

 

[Diagram: a single vSphere HA/DRS cluster stretched across two datacenters on the same campus]

 

This cluster is stretched across two datacenters. For this example let’s assume that each datacenter has an IP-based storage array that is accessible to all the hosts in the cluster and the link between the two datacenters is Layer 2. This means that all of the hosts in the cluster are Layer 2 adjacent. At first glance, this configuration may be desirable because of its perceived elegance and simplicity. Let’s take a look at the perceived functionality.

  • If either datacenter has a failure, the VMs should be restarted on the other datacenter’s hosts via High Availability (HA).
  • No need for manual intervention or something like Site Recovery Manager.

Unfortunately, perceived functionality and actual functionality differ in this scenario. Let’s take a look at an HA failover scenario from a storage perspective first.

  • If virtual machines fail over from hosts in one datacenter to hosts in the other, their storage will still be accessed from the originating datacenter.
  • In other words, the surviving hosts end up accessing storage that is not local to their datacenter, as shown in the diagram below.

[Diagram: after an HA failover, hosts in one datacenter access storage located in the other datacenter]

This situation is not ideal in most cases, and if the originating datacenter is completely isolated, the storage cannot be accessed at all. Let’s take a look at what happens when one datacenter loses communication with the other datacenter, but not with its own local hosts. This is depicted in the diagram below.

[Diagram: the inter-datacenter link fails while each datacenter keeps connectivity to its local hosts]

  • Prior to vSphere 5.0, if the link between the datacenters went down or some other communication disruption happened at this location in the network, each set of hosts would think that the others were down. This is a problem because each datacenter would attempt to bring the other datacenter’s virtual machines up. This is known as a split-brain scenario.
  • As of vSphere 5.0, each datacenter would create its own Network Partition from an HA perspective and proceed to operate as two independent clusters (although with some limitations) until connectivity was restored between the datacenters.
  • However, this scenario is still not ideal due to the storage access.

So what can be done? Well, beyond VM-to-Host affinity rules, if the sites are truly to be active/standby (with the standby site perhaps running lower-priority VMs), the cluster should be split into two different clusters – perhaps even different vCenter instances (one for each site) if Site Recovery Manager (SRM) will be used to automate the failover process. If there is a use case for a single cluster, then external technology needs to be used. Specifically, the storage access problem can be addressed by using a technology like VPlex from EMC. In short, VPlex allows one to have a distributed (across two datacenters) virtual volume that can be used for a datastore in the vSphere cluster. This is depicted in the diagram below.

 

[Diagram: a VPlex distributed virtual volume presented as a single datastore across both datacenters]

A detailed explanation of VPlex is beyond the scope of this post. At a high level, the distributed volume can be accessed by all the hosts in the stretched cluster. VPlex is capable of keeping track of which virtual machines should be running on the local storage that backs the distributed virtual volume. In the case of a complete site failure, VPlex can determine that the virtual machines should be restarted on the underlying storage that is local to the other datacenter’s hosts.
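For the VM-to-Host affinity rules mentioned earlier, here is a rough pyVmomi sketch of a “should run on” rule that keeps site A’s VMs on site A’s hosts while still letting HA restart them at the other site. It assumes you have already connected to vCenter and looked up the cluster, VM, and host objects, and the group and rule names are made up:

    from pyVmomi import vim

    def add_site_a_affinity_rule(cluster, site_a_vms, site_a_hosts):
        """Softly pin a set of VMs to the hosts in one datacenter."""
        vm_group = vim.cluster.VmGroup(name="site-a-vms", vm=site_a_vms)
        host_group = vim.cluster.HostGroup(name="site-a-hosts", host=site_a_hosts)
        rule = vim.cluster.VmHostRuleInfo(
            name="site-a-vms-prefer-site-a-hosts",
            enabled=True,
            mandatory=False,  # a "should" rule: HA may still restart these VMs at the other site
            vmGroupName="site-a-vms",
            affineHostGroupName="site-a-hosts",
        )
        spec = vim.cluster.ConfigSpecEx(
            groupSpec=[
                vim.cluster.GroupSpec(operation="add", info=vm_group),
                vim.cluster.GroupSpec(operation="add", info=host_group),
            ],
            rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)],
        )
        # Apply the change to the cluster; returns a vCenter task to monitor.
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)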

Technology is bringing us closer to location aware clusters. However, we are not quite there yet for a number of use cases as external equipment and functionality tradeoffs need to be considered. If you have the technology and can live with the functionality tradeoffs, then stretched clusters may work for your infrastructure. The simple design choice for many continues to be separate clusters.

Back to the Basics with Virtualization Capacity Planning

To be sure, there are plenty of new features to get excited about in vSphere 5.0. VMware has come a long way since 2002, when I first started using the technology. Often in the technology world, practitioners get excited about learning and implementing new technology without planning properly. They want to implement as fast as possible to bring about the benefits and innovation that the new technology has to offer. I believe that we have all been guilty of this at one point. So, this post is to remind all technology practitioners to take a step back and think about proper planning when implementing new technology projects. One of the basic tasks that should be done at the beginning of any virtualization design is capacity planning.

My role at TBL allows me to examine many virtual infrastructures. One of the common challenges that I see in many of these infrastructures is resource allocation after they have been running for a while. Workloads were virtualized quickly without proper capacity planning, and by the time I am called in to assess the infrastructure, resources are strained in the environment. This point may come quickly if proper capacity planning is not performed up front. However, ongoing capacity planning must be performed periodically as moves, adds, and changes occur in the virtual infrastructure. Below are a few general recommendations for proper capacity planning:

  • Plan for performance, then capacity for production workloads – I have seen the opposite happen too many times to count. The storage capacity is planned for, but not the storage performance. Look at all workloads that will be virtualized. Measure the peak IOPS that will be required. Plan to fulfill the IOPS requirements, then add disks if necessary to meet the capacity requirements. This general approach will ensure a solid performance foundation.
  • Plan for peaks, not just averages – If you plan for averages, the environment may run OK until a performance spike is encountered. Then, performance may suffer for a critical workload. Think about things like month-end processing, student enrollment, or sales peaks. These times are when the environment needs the resources the most. Plan for the peaks accordingly.
  • Don’t forget about overhead – In a virtual infrastructure, there are some files and associated overhead required by the system to run the virtual workloads. These files may not seem like much by themselves, but in aggregate, they can add up to a lot. An example of something to plan for might be virtual machine swap files. In a vSphere infrastructure, the virtual machine swap file size is the difference between the assigned memory and the reserved memory for a virtual machine. A quick sketch of this arithmetic follows this list.
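To make that concrete, here is a small back-of-the-napkin sketch in Python. The workload numbers are invented, and the 175 IOPS-per-spindle figure is only a common rule of thumb that ignores RAID write penalties and read/write mix, but it shows how peak IOPS and swap-file overhead feed a design:

    # (name, assigned_mem_gb, reserved_mem_gb, peak_iops) -- made-up workload inventory
    workloads = [
        ("sql01",  32, 16, 1800),
        ("exch01", 24,  8, 1200),
        ("file01",  8,  0,  300),
        ("web01",   4,  0,  150),
    ]

    # Overhead: each VM gets a swap file equal to assigned memory minus reserved memory.
    swap_overhead_gb = sum(assigned - reserved for _, assigned, reserved, _ in workloads)

    # Performance: design for the sum of the peaks, not the average.
    total_peak_iops = sum(iops for _, _, _, iops in workloads)

    # Rough spindle count at ~175 IOPS per 15K disk (rule of thumb, no RAID penalty).
    spindles_for_performance = -(-total_peak_iops // 175)  # ceiling division

    print(f"Swap-file overhead to budget on datastores: {swap_overhead_gb} GB")
    print(f"Aggregate peak IOPS to design for: {total_peak_iops}")
    print(f"Spindles needed for performance alone: {spindles_for_performance}")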

Ongoing capacity planning is needed as well to maintain a virtual infrastructure. This is where tools like vCenter Capacity IQ can help. Capacity IQ is capable of performing ongoing capacity planning, reporting, what-if scenarios and more. For example, if you want to see at what point you need to add more capacity in your infrastructure, Capacity IQ can model that based on your deployment patterns in the past. This is a very powerful analytic tool that can help you stay ahead of your capacity needs.
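The what-if modeling that a tool like Capacity IQ automates boils down to projections like the trivial sketch below, which uses hypothetical numbers to estimate when memory headroom runs out at the current deployment pace:

    cluster_memory_gb = 512   # usable memory across the cluster (hypothetical)
    used_memory_gb = 350      # currently consumed
    new_vms_per_month = 6     # observed deployment pattern
    avg_new_vm_gb = 6         # average memory footprint of a new VM

    headroom_gb = cluster_memory_gb - used_memory_gb
    months_until_full = headroom_gb / (new_vms_per_month * avg_new_vm_gb)
    print(f"At the current pace, memory headroom lasts roughly {months_until_full:.1f} months")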

If we can plan from the beginning and utilize intelligent ongoing planning for capacity, then we can move from a reactive stance to a proactive stance while still being able to provide innovation quickly for the business. That’s a powerful combination. If you have questions about capacity planning, please feel free to contact me.

Running a Lean Branch Office with the Cisco UCS Express

Centralized management brings organizations more control over resources with fewer equipment assets in the field. There are many cases where equipment may be needed in a branch office to speed access time to a resource or eliminate the dependency on a network link to the central datacenter. It is very common to see at least one, if not multiple, servers at the branch office to provide file/print services or user authentication. Perhaps the servers are providing some service that is specialized to a particular business (banking applications come to mind here). Whatever service is being provided, sometimes it is better to maintain local access at the branch. So there are servers to maintain at the branch office, as well as networking gear and other such devices.

What if you could consolidate your branch office services with your router? That is exactly what the Cisco UCS Express is meant to do. The UCS Express is a Services-Ready Engine (SRE) module that works in Integrated Services Router Generation 2 (ISR G2) routers. This module is a server that you can run VMware ESXi on to provide branch office services. Here is an example of an ISR G2 device:

 

[Image: Cisco UCS Express ISR G2 port schematics]

 

The slots you see at the bottom of the device are where the SRE UCS Express modules are located. A UCS Express module is shown below.

 

[Image: Cisco UCS Express main schematics]

 

Here are a couple of the highlights of this architecture:

  • One or two 500 GB hot-swap hard drive options are available
  • Single-core or dual-core CPU options are available
  • 4 or 8 GB of RAM available
  • iSCSI initiator hardware offload if you need to connect to an external iSCSI device
  • Direct SRE-to-LAN connectivity, which reduces cabling
  • Maintenance is covered under SMARTnet

This architecture provides all that a branch office may need by virtualizing several branch office services onto the SRE UCS Express Module. The ESXi instance can be managed centrally by your existing vCenter installation. This gives you the benefits of local service access and centralized management while reducing the equipment needs at the branch office. Pretty slick.
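As a small illustration of that central visibility, the following pyVmomi sketch connects to an existing vCenter and lists every host it manages, branch-office SRE modules included. The hostname and credentials are placeholders, and disabling certificate verification is a lab-only shortcut:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()   # lab shortcut; use valid certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="********", sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            hw = host.summary.hardware
            print(f"{host.name}: {hw.numCpuCores} cores, "
                  f"{hw.memorySize // (1024 ** 3)} GB RAM, {len(host.vm)} VMs")
    finally:
        Disconnect(si)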

If you would like to discuss how this architecture might be able to help your organization or want further technical details, please feel free to contact me.

Memory Management in vSphere – Where We Are Today

This is a quick blog post to discuss where vSphere stands with memory management today. vSphere has many mechanisms to reclaim memory before resorting to paging to disk. Let’s briefly look at these methods.

 

Memory Reclamation

  • Transparent Page Sharing (TPS)
    • Think of this as deduplication for memory. Identical pages of memory are shared among many VMs instead of provisioning a copy of that same page to every VM. This can have a tremendous impact on the amount of RAM used on a given host if there are many identical pages. A simplified sketch of the idea follows this list.
  • Ballooning
    • This method increases the memory pressure inside the guest so that memory that is not being used can be reclaimed. If the hypervisor were to just start taking memory pages from guests, the guest operating systems would not react positively to that. So, ballooning is a way to place artificial pressure on the guest VM so that the VM pages unused memory to disk. Then, the hypervisor can reclaim that memory without disrupting the guest OS.
  • Memory compression
    • This method attempts to compress memory pages that would normally be swapped out via hypervisor swapping. This is preferable to swapping, as there can be a performance impact when memory is swapped to disk.
  • Hypervisor swapping
    • This is the last resort for memory management. The memory pages are swapped to disk. New in vSphere 5 is support for swapping these memory pages to SSDs. This increases performance when swapping is needed.
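As a toy illustration of the page-sharing idea, the Python sketch below hashes 4 KB pages and counts how many could collapse into a single shared copy. Real TPS operates on host physical memory and does a full bit-for-bit comparison after the hash match; this only shows the concept:

    import hashlib
    from collections import Counter

    PAGE_SIZE = 4096  # bytes; the 4 KB granularity page sharing works at

    def shareable_pages(vm_memory_images):
        """Count how many pages across all VMs could collapse into one shared copy.

        vm_memory_images: dict of vm name -> bytes object standing in for guest memory.
        """
        page_hashes = Counter()
        total_pages = 0
        for image in vm_memory_images.values():
            for offset in range(0, len(image), PAGE_SIZE):
                page = image[offset:offset + PAGE_SIZE]
                page_hashes[hashlib.sha1(page).digest()] += 1
                total_pages += 1
        # Keep one physical copy per unique page; the rest are sharing candidates.
        return total_pages, total_pages - len(page_hashes)

    # Three toy "VMs" whose memory is mostly identical zero-filled pages.
    images = {f"vm{i}": bytes(PAGE_SIZE * 10) for i in range(3)}
    total, saved = shareable_pages(images)
    print(f"{saved} of {total} pages could be shared")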

As you can see there are many memory management techniques in vSphere that allow greater consolidation ratios. The hypervisor in the virtual infrastructure does much more than just host guest VM images. There is a lot going on under the hood to consider before choosing a specific hypervisor to serve as the foundation for your infrastructure. Feel free to contact me if you would like to discuss any of the “under the hood” features of vSphere.

End User Computing with VMware

The desktop PC is dead! Finally!

Well, not yet, but VMware is sure working hard to make this a reality. For quite a few “todays” now, I have been discussing with clients and colleagues why the traditional desktop model does not make sense for “today’s” end user. VMware calls this user-centric approach to computing End User Computing. End users need access to their applications and information on any device from anywhere. They should not know or care about the nuances of the Operating System. This sounds like a lofty goal, but it is becoming more of a reality every year.

If we look at the last decade (or even further back, into the ’90s), we have seen the Operating System itself hold the spotlight. New “Operating System” features were actually marketed towards end users:

  • The latest OS supports more RAM!
  • The latest OS supports 64-bit computing!
  • The latest OS supports Solid State Flash Drives!
  • The latest OS can take advantage of a USB drive to cache your file searches and access! (What?)

I can’t think of one real end user (IT folks don’t count, sorry) who cares about any of the above. Operating System, you had your chance in the spotlight. It’s time to fade into the background where you belong. End users care about their applications to get work and play done. Operating Systems just get in the way of delivering those applications more often than not.

Virtual Desktop Infrastructure certainly eases the pain of managing the Operating System for the IT administrator while still giving the end user a computing experience that they are accustomed to. For some end users, the desktop that they are accustomed to will be adequate. For other users, IT needs to deliver a computing experience beyond what they are accustomed to: no start menus, no shortcuts, no c: drive, d: drive, etc. What I’m talking about is an end user experience where the applications are front and center on any device, from anywhere. This is exactly the type of technology that VMware has been working on and previewed at VMworld 2011. They are calling it Project Appblast.

Imagine being able to launch any application (including Windows applications) using nothing more than an HTML5-compliant browser. VMware has released a technical preview video of this exciting project.

The desktop PC and Operating System’s days are numbered. Bring on the apps.