So Much for Simplicity

By Drew Robb

Virtualization is exploding in popularity. Virtual machine (VM) deployments are expected to grow from 540,000 in 2006 to more than 4 million in 2009, according to research house IDC. While the benefits are widely advertised, the complexities have not been discussed as thoroughly. As a result, a prevailing attitude of complacency has taken hold in the VM arena: because you can potentially do so much, and the gains are often so spectacular, server administrators may not be taking the same performance-management precautions they once did.


“There is a perception that VMware VirtualCenter and basic resource throttling is enough, but this doesn't give the full picture,” said Andi Mann, an analyst at Enterprise Management Associates. “The bundled tools for managing VMs are not enough to guarantee SLAs (service level agreements) based on business performance objectives.”


Virtualization, after all, adds another layer to what is already a complex environment. You start with an OS, applications, Web servers, middleware, databases, interfaces, etc., and add to that a hypervisor layer that is largely deficient in fundamental management capabilities such as performance and capacity management. And since it takes only seconds to add a new VM, deployments left unchecked give rise to VM sprawl: virtual machines multiplying faster than anyone can track or manage them. The result is that you have multiplied the volume of systems you need to manage, increased the depth of management required, and yet have insufficient tools to do so.
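
To make the sprawl problem concrete, here is a minimal sketch of the kind of audit those missing tools would automate. It assumes a hypothetical inventory dictionary rather than any particular platform's API; the VM names and dates are invented for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical inventory: VM name -> (owner, last sustained activity).
# In practice this would come from the virtualization platform's API.
inventory = {
    "web-test-03": ("qa",  datetime(2007, 11, 2)),
    "db-dev-old":  ("dev", datetime(2007, 6, 15)),
    "app-prod-01": ("ops", datetime(2008, 1, 28)),
}

def sprawl_candidates(vms, now, idle_days=60):
    """Return names of VMs with no meaningful activity for idle_days."""
    cutoff = now - timedelta(days=idle_days)
    return [name for name, (_, last_active) in vms.items()
            if last_active < cutoff]

print(sprawl_candidates(inventory, now=datetime(2008, 2, 1)))
# ['web-test-03', 'db-dev-old']
```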


According to Mann, the management tools in VirtualCenter and other virtualization platforms do not provide a broad view of performance across multiple hosts and subnets. Nor do they help administrators understand physical performance issues. They are not sufficiently application-aware, let alone aware of the interactions among the components of a composite application (with a separate app server, database server and Web server, for example). VM tools also tend to miss the boat with regard to business services and priorities.


“If you do not properly manage performance, you can end up with a single VM overusing or saturating resources in a host,” said Mann. “An overactive application can saturate the channels to the database, using 95% of the network interface, which slows down I/O for all other VMs on the same host.”


But that’s just one scenario. A highly processor-intensive application can saturate the server, using 95% of the CPU and leaving only 5% for every other application on the host. Ironically, the under-utilized server, whose elimination is one of virtualization’s most touted benefits, may reappear as a consequence of this lack of effective VM management tools.
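
Both failure modes reduce to the same check: one guest consuming a disproportionate share of a shared host resource. A minimal sketch, using made-up per-VM utilization samples rather than any specific hypervisor's counters:

```python
# Hypothetical samples: each VM's usage as a fraction of the host's
# total capacity for that resource (numbers are illustrative only).
vm_usage = {
    "crm-db":   {"cpu": 0.12, "net": 0.95},  # the network scenario above
    "batch-01": {"cpu": 0.95, "net": 0.02},  # the CPU scenario above
    "web-01":   {"cpu": 0.05, "net": 0.01},
}

SATURATION = 0.80  # flag any VM using more than 80% of a shared resource

def saturating_vms(usage, threshold=SATURATION):
    """Yield (vm, resource, share) where one guest starves its neighbors."""
    for vm, resources in usage.items():
        for resource, share in resources.items():
            if share > threshold:
                yield vm, resource, share

for vm, resource, share in saturating_vms(vm_usage):
    print(f"{vm} is using {share:.0%} of host {resource}")
```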


Under-provisioning, said Mann, tends to happen first, i.e., attempting to squeeze as many workloads as possible onto a single system. “Without accurate performance and capacity tools, under-provisioning is usually the first mistake, as administrators and IT managers typically put more VMs on a server than it has resources to deal with,” said Mann. “That leads to over-provisioning as they react by making sure they have spare headroom even for exception cases.”
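
The under-provisioning mistake Mann describes can be caught with a simple admission check before a VM is placed, and the over-provisioning reflex corresponds to the headroom reserve in that check. A sketch with invented numbers, not a model of any particular scheduler:

```python
def fits_on_host(host_capacity, placed_vms, new_vm, headroom=0.20):
    """Admit new_vm only if total peak demand stays below capacity
    minus a reserve kept back for exception cases."""
    for resource, capacity in host_capacity.items():
        demand = sum(vm[resource] for vm in placed_vms) + new_vm[resource]
        if demand > capacity * (1 - headroom):
            return False
    return True

host = {"cpu_ghz": 16.0, "ram_gb": 64.0}
placed = [{"cpu_ghz": 4.0, "ram_gb": 16.0},
          {"cpu_ghz": 6.0, "ram_gb": 24.0}]

print(fits_on_host(host, placed, {"cpu_ghz": 2.0, "ram_gb": 8.0}))   # True
print(fits_on_host(host, placed, {"cpu_ghz": 4.0, "ram_gb": 16.0}))  # False
```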


Virtualization vs. Capacity Planning


The time-honored practice of capacity planning, then, is essential in any virtualized environment. Unfortunately, many incorrectly assume that as virtualization’s popularity increases, capacity management’s value steadily diminishes. The opposite, however, turns out to be the case.


“Despite propaganda to the contrary, capacity planning is more important than it has ever been,” said Jerred Ruble, CEO of TeamQuest Corp. “Technologies such as VMware Distributed Resource Scheduler, utility computing, IBM Workload Manager or grid computing will never eliminate the need for solid capacity planning.”


Such tools provide intelligent dynamic resource allocation, continuously balanced computing capacity, real-time server utilization optimization and automated dynamic reconfiguration. They certainly help manage existing environments, add much-needed automation and ensure workloads have appropriate resources. They can also be useful in supplying capacity quickly and easily to meet varying usage requirements. But they don’t tell you what you need, don’t relate well to business goals and don’t help you look into the future.
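
What capacity planning adds on top of that automation is the forward view. A toy trend projection illustrates the idea; real planning tools model queuing effects and workload mixes, not just linear growth, and the utilization figures here are invented (statistics.linear_regression requires Python 3.10 or later):

```python
from statistics import linear_regression

# Twelve months of average cluster CPU utilization (illustrative data).
months = list(range(12))
util = [0.41, 0.43, 0.44, 0.47, 0.48, 0.51,
        0.53, 0.54, 0.57, 0.59, 0.60, 0.63]

slope, intercept = linear_regression(months, util)

LIMIT = 0.80  # utilization at which service levels start to degrade
months_to_limit = (LIMIT - intercept) / slope

print(f"Growth is {slope:.1%} per month; "
      f"the {LIMIT:.0%} ceiling is reached around month {months_to_limit:.0f}")
```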


It’s one thing to add more processing power and memory to optimize a specific application or VM. But the last thing you want is such changes being made automatically and uncontrollably, as the ROI may not merit the investment.


That’s where capacity and performance management tools from vendors like TeamQuest, CA, and BMC Software come in. They monitor existing performance levels and enable IT to model different scenarios to determine what changes should be made to better support VMs. Perhaps more importantly, they relate the cost of proposed changes to the performance benefits of implementing them. By viewing this in advance, IT can find the sweet spot in terms of cost/benefits and implement accordingly.


Take the case of a large insurance firm that demanded a response time of 1.5 seconds for a new application. Using TeamQuest Model, the capacity planner discovered that this solution would cost $15 million. Further modeling revealed that a 3-second response time would reduce the budget to $12 million, and a 5-second response time would cost $10 million. Presented with this information, management took a second look at the original specifications and concluded that it was better to accept a short delay than to pay an extra $3 million for the 1.5-second response time.
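
The tradeoff in that example is easy to lay out explicitly. A toy calculation using the article's own figures, with the 5-second option as the baseline:

```python
# Cost of meeting each response-time target, from the insurance example.
options = {1.5: 15_000_000, 3.0: 12_000_000, 5.0: 10_000_000}

baseline_rt, baseline_cost = 5.0, options[5.0]
for rt in sorted(options):
    extra = options[rt] - baseline_cost
    saved = baseline_rt - rt
    print(f"{rt}s target: +${extra / 1e6:.0f}M for a {saved}s faster response")
# 1.5s target: +$5M for a 3.5s faster response
# 3.0s target: +$2M for a 2.0s faster response
# 5.0s target: +$0M for a 0.0s faster response
```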


In addition, capacity planning facilitates the rightsizing of new applications. Ruble tells of an IT service provider whose capacity planner modeled a new application and discovered it would flood the network with log traffic. The finding was presented to the designers, who corrected the bug before it became a problem.


“Capacity planning takes the guesswork out of accommodating future business workloads,” said Ruble. “It also ensures that a virtualized infrastructure is configured optimally to meet required service levels.”


Virtually Lacking


A recent survey by Netuitive, Inc., underscores the shortage of virtualization management. Of the VMware customers polled about their ability to manage VMs, 94 percent weren’t confident in the tools they currently use to manage their virtual environments. Respondents cited poor visibility into performance, difficulty in isolating root causes and high administration time as their major complaints.


"A new approach is needed, one that uses sophisticated, real-time analytics to reduce the massive manual effort of managing VM complexity and ultimately creates confidence and restores performance predictability to managing VMs," said Mann. “That requires collecting metrics across virtualization technologies, vendors, and platforms, and across both guests and hosts, correlating them with each other and with physical metrics, and aligning them with application and business policies.”


Capacity and performance tools fulfill many of these needs. In addition, Mann names Netuitive, Hyperic Inc. of San Francisco, InfoVista SA in Paris, and eG Innovations Inc. of Iselin, NJ, as niche vendors with promising technology in analytics and VM monitoring.