To be sure, there are plenty of new features to get excited about in vSphere 5.0. VMware has come a long way since 2002, when I first started using the technology. Often in the technology world, practitioners get excited about learning and implementing new technology without planning properly. They want to implement as fast as possible to bring about the benefits and innovation that the new technology has to offer. I believe that we have all been guilty of this at one point. So, this post is to remind all technology practitioners to take a step back and think about proper planning when implementing new technology projects. One of the basic tasks that should be done at the beginning of any virtualization design is capacity planning.
My role at TBL allows me to examine many virtual infrastructures. One of the common challenges I see in many of these infrastructures is resource allocation after they have been running for a while. Workloads were virtualized quickly without proper capacity planning, and by the time I am called in to assess the infrastructure, resources are strained in the environment. This point may come quickly if proper capacity planning is not performed up front. Even then, ongoing capacity planning must be performed periodically as moves, adds, and changes occur in the virtual infrastructure. Below are a few general recommendations for proper capacity planning:
- Plan for performance, then capacity for production workloads – I have seen the opposite happen too many times to count. The storage capacity is planned for, but not the storage performance. Look at all workloads that will be virtualized. Measure the peak IOPS that will be required. Plan to fulfill the IOPS requirements, then add disks if necessary to meet the capacity requirements. This general approach will ensure a solid performance foundation.
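To make the performance-first approach concrete, here is a minimal sketch of the arithmetic involved. The workload figures, RAID write penalties, and per-disk IOPS ratings below are illustrative assumptions, not measured values; substitute peaks from your own monitoring data.

```python
import math

# RAID write penalty: each front-end write costs extra back-end I/Os.
# Common values: RAID 10 = 2, RAID 5 = 4 (assumed here for illustration).
def disks_needed(peak_iops, read_fraction, raid_penalty, iops_per_disk):
    """Number of disks required to satisfy a peak front-end IOPS load."""
    read_iops = peak_iops * read_fraction
    write_iops = peak_iops - read_iops
    backend_iops = read_iops + write_iops * raid_penalty
    return math.ceil(backend_iops / iops_per_disk)

# Hypothetical workload: 5,000 peak IOPS, 70% reads, RAID 5,
# 180 IOPS per 15K spindle.
print(disks_needed(5000, 0.70, 4, 180))  # 53
```

Note that 53 disks sized for performance may hold far more capacity than the workload needs; that spare capacity is the cost of a solid performance foundation, not waste.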
- Plan for peaks, not just averages – If you plan for averages, the environment may run OK until a performance spike is encountered. Then, performance may suffer for a critical workload. Think about things like month-end processing, student enrollment, or sales peaks. These times are when the environment needs the resources the most. Plan for the peaks accordingly.
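The gap between averages and peaks is easy to underestimate. The sample set below is invented for illustration (imagine hourly IOPS across a month-end window); real numbers should come from your performance monitoring tool.

```python
# Hypothetical hourly IOPS samples spanning a month-end processing spike.
samples = [800, 900, 850, 1200, 4500, 5200, 4800, 950, 880, 910]

average = sum(samples) / len(samples)
peak = max(samples)

print(f"average: {average:.0f} IOPS, peak: {peak} IOPS")
```

Here the average works out to roughly 2,100 IOPS while the peak is 5,200. An environment sized to the average would starve the month-end run, which is exactly when the business needs it most.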
- Don’t forget about overhead – In a virtual infrastructure, the system requires certain files and associated overhead to run the virtual workloads. These files may not seem like much by themselves, but in aggregate they can add up to a lot. One example to plan for is virtual machine swap files. In a vSphere infrastructure, the virtual machine swap file size is the difference between the memory assigned to a virtual machine and the memory reserved for it.
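The swap file rule above is simple per VM but adds up across an environment. The sketch below uses hypothetical VM counts and memory sizes to show the aggregate effect.

```python
def vswp_size_gb(assigned_gb, reserved_gb):
    """Per-VM swap file size: assigned memory minus the memory reservation."""
    return assigned_gb - reserved_gb

# Hypothetical fleet: 100 VMs, each with 8 GB assigned and no reservation.
# Each VM needs an 8 GB .vswp file on its datastore.
total_gb = 100 * vswp_size_gb(8, 0)
print(total_gb)  # 800
```

That is 800 GB of datastore capacity consumed before a single guest writes any data of its own, which is why overhead belongs in the capacity plan from day one.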
Ongoing capacity planning is needed as well to maintain a virtual infrastructure. This is where tools like VMware vCenter CapacityIQ can help. CapacityIQ is capable of performing ongoing capacity planning, reporting, what-if scenarios, and more. For example, if you want to see at what point you will need to add more capacity to your infrastructure, CapacityIQ can model that based on your past deployment patterns. This is a very powerful analytic tool that can help you stay ahead of your capacity needs.
If we can plan from the beginning and utilize intelligent ongoing planning for capacity, then we can move from a reactive stance to a proactive stance while still being able to provide innovation quickly for the business. That’s a powerful combination. If you have questions about capacity planning, please feel free to contact me.
I recently spoke at a lunch-and-learn event about “Security in a Virtualized World”. If one thing was made abundantly clear during the discussion, it was that securing a virtual infrastructure is more complicated than securing a physical one. There are many moving parts to consider along with the hypervisor itself. For many years, I have been discussing the need for automation with my clients. Automation makes the infrastructure much easier to manage, and from a security standpoint, it helps ensure that build policies are consistent for all of the virtual hosts in the infrastructure.
There have always been tools to automate a vSphere infrastructure, ranging from Perl scripts to PowerCLI. With the release of vSphere 5, automation is becoming more and more of a reality. When you think about automating a VMware infrastructure, you may think about writing scripts to perform certain tasks or spending hours on the “perfect” ESX build that can be deployed through automation. Scripts are still available and, in some cases, necessary for automation. However, with vSphere 5 we are beginning to see an “automation-friendly” environment built into the management tools that VMware gives us.
ESXi: Built for Automation
One of the most important aspects of maintaining a consistent environment starts with the hypervisor deployment itself. ESXi is the official hypervisor that will be deployed in vSphere environments moving forward, and it has come a long way since Virtual Infrastructure 3. vSphere 4.1 saw the release of official Active Directory authentication integration, which means you can authenticate to your ESXi hosts using Active Directory. The vSphere CLI and vMA have many more commands available now, and PowerCLI is more feature-rich, with more cmdlets than ever before. Probably the most significant aspect of ESXi that makes it built for automation is its footprint on disk. Since ESXi only takes up a few hundred megabytes on disk, it is easy to deploy from the network. While that alone would make it possible to install a common ESXi image across the infrastructure, vSphere 5 takes this one step further.
vSphere 5 Auto Deploy
Auto Deploy is a new deployment method in vSphere 5. This deployment method allows you to PXE-boot the ESXi hosts from the network, load a common image, and apply configuration settings after the image is loaded via vCenter Host Profiles. The idea is to maintain a consistent deployment throughout the infrastructure by eliminating human error. With this method, ESXi has essentially zero disk footprint, as the image is loaded into the memory of the host. The hosts become truly stateless and are decoupled from any underlying storage dependency. In other words, the host images are disposable. The hypervisor becomes just another part of the infrastructure, disappearing into the background like it should. After all, the virtual machines themselves run the applications. They are the real stars of the show. The system administrators should not have to think about maintaining the hypervisor itself. Let the infrastructure work for you instead of you working for the infrastructure.
Consistency is the key to any stable, secure infrastructure. An infrastructure component as important as the hypervisor should have a consistent, repeatable deployment that introduces as little human intervention as possible. vSphere 5 Auto Deploy makes this possible. You still have to do the work up front to ensure the hypervisor image is built properly. After that, you can let the hypervisor fade into the background and do what it does best. It can provide the best platform for running the applications that run your business.
We all get in a hurry. When we get in a hurry we make mistakes. The following scenario has been played out plenty of times in a virtual infrastructure.
- VM Administrator gets a request for a new VM to be deployed ASAP, which usually means yesterday.
- VM Administrator looks through multiple datastores to determine a datastore with a sufficient amount of capacity.
- VM Administrator picks the datastore and deploys the VM.
What if this particular VM were a database server whose log volume needed to be provisioned on a RAID 1/10 datastore? Hopefully the datastores are named with the RAID level in the naming convention. But what if they are not? Even if they are, it can be very tedious to wade through multiple datastores to find one that meets both capacity and performance requirements. What if there were a way to “tag” certain datastores with characteristics that are meaningful to the VM administrator? That’s where the new Profile-Driven Storage feature comes in with vSphere 5.
Profile-Driven Storage allows user-defined “tags” and automated storage property discovery through the vStorage APIs for Storage Awareness (VASA). Let’s take a look at the user-defined “tags” first.
User-Defined Storage Profiles
Very simply, the user-defined “tags” allow one to “tag” a datastore with meaningful text. In the example above, we could define an appropriate datastore as “RAID 1” in the datastore’s storage profile. Then, when the VM administrator provisions the VM, he or she simply selects the “RAID 1” storage profile as being applicable to the VM that is being provisioned. This ensures that the VM will be placed on an appropriate datastore, because only those datastores that fit the “RAID 1” storage profile will be available as choices during the provisioning process. If more than one virtual hard drive will be in the VM, then multiple storage profiles can be used. For example, you could use a “RAID 1” profile for one virtual disk and a “RAID 5” profile for another. The storage profiles ensure compliance and make it easier for the VM administrator to provision a new VM without human error.
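The matching logic described above can be sketched in a few lines. To be clear, this is a toy Python model of the concept, not the vSphere API; the datastore names and tags are invented for illustration.

```python
# Hypothetical datastores, each tagged with user-defined profile text.
datastores = {
    "DS01": {"RAID 1"},
    "DS02": {"RAID 5"},
    "DS03": {"RAID 1", "Replicated"},
}

def compatible(profile, datastores):
    """Return the datastores whose tags satisfy the requested profile."""
    return sorted(name for name, tags in datastores.items() if profile in tags)

print(compatible("RAID 1", datastores))  # ['DS01', 'DS03']
```

The VM administrator never sees DS02 when provisioning against the “RAID 1” profile, which is exactly how the feature removes the tedious manual datastore hunt.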
VASA Storage Profiles
Arrays that can take advantage of VASA can provide storage characteristics to the VM administrator. Examples might be RAID level, deduplication, and replication. These system-provided characteristics can be assigned to a storage profile, which further eliminates human error and helps ensure compliance during and after provisioning.
As you can see, Profile-Driven Storage ensures that VMs get provisioned correctly the first time. There is no need to Storage vMotion virtual machines around after the fact unless their storage requirements change. The above is a very simple example of what can be done with Profile-Driven Storage in vSphere 5. The feature is flexible enough to fit many different use cases; it’s up to you VM admins out there to fit it to your particular use case.
TBL Networks’ Solutions Engineer Sean Crookston recently attained the title of VMware Certified Advanced Professional in Datacenter Administration (VCAP-DCA). Sean is only the 47th person worldwide to achieve this elite virtualization certification.
The VMware Certified Advanced Professional 4 – Datacenter Administration (VCAP4-DCA) certification is designed for administrators, consultants, and technical support engineers who are capable of working with large and complex virtualized environments and can demonstrate technical leadership with VMware vSphere technologies.
Sean put in many hours of study and research to reach this achievement, and he has documented much of this work on his website – www.seancrookston.com.
Congratulations to Sean Crookston, VCAP-DCA #47.
Follow Sean Crookston on Twitter at www.twitter.com/seancrookston
Follow TBL Networks on Twitter at www.twitter.com/tblnetworks