Virtualization is all the rage these days. VMware's IPO last summer netted the company nearly $1 billion, the biggest IPO since Google.
Fast forward a mere five and a half months, and VMware's stock plunged, leading to such headlines as "VMware Smashed," "The Party's Over at VMware," and "VMware: A Wall Street Chainsaw Massacre."
What changed between August 2007 and January 2008? Not much, truth be told. The market in general was down, and VMware did miss its projected Q4 revenue mark. Yet revenues were still up, way up, over Q4 2006. So, what was all the fuss about? I don't claim to be a stock analyst, but I believe one of the variables that hurt VMware, and virtualization in general, is that the technology is being over-hyped. It will help usher in green IT. It enables disaster recovery and business continuity. It hardens security. It reduces operating costs.
All true, but as any IT pro knows from hard-earned experience, the adoption of new technologies always comes with growing pains. Hyping virtualization as a silver-bullet, plug-and-play technology is false advertising. Successful virtualization projects are vastly more complicated than vendors admit, and the risks associated with a poorly implemented effort are serious.
Risk and New Technology
What, then, are the risks? "Security is an issue," said Gary Chen, senior analyst.
Most analysts agree and believe that security, while not something to ignore, won't be a huge issue. The real issue is performance and management. "With virtualization, performance takes a hit," Chen said. "This will improve over time. Hardware is adapting. Operating systems are becoming virtualization aware, but issues like I/O and application compatibility are real problems."
A corresponding problem is that many of these performance issues are hard to pinpoint. From an end user perspective, why is the application underperforming? It's a mystery. End users just know that it's not on par with what it used to be. Of course, end users aren't expected to figure these things out. They have IT for that.
But what if IT can't figure it out either? Today's virtualization monitoring solutions are blunt tools that can miss key performance variables. Incompatible applications may reside side by side on the same server. Applications may have synchronized traffic peaks that are missed, resulting in micro-saturation. Yet, diagnostic tools will show nothing.
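The micro-saturation problem can be sketched with a few lines of arithmetic. The following is a minimal, illustrative Python example using synthetic data (the workloads, the capacity figure, and the sampling intervals are all invented for illustration, not drawn from any real tool): two VMs whose I/O bursts happen to coincide look fine when a monitoring tool reports only per-minute averages, even though the host is briefly saturated.

```python
# Hypothetical illustration of micro-saturation: two co-resident VMs
# with synchronized I/O bursts. All numbers here are synthetic.

HOST_CAPACITY = 100  # arbitrary units of I/O throughput

# One-second samples over a minute for each VM: quiet for 50 seconds,
# then both burst at the same moment.
vm_a = [10] * 50 + [90] * 10
vm_b = [15] * 50 + [70] * 10

combined = [a + b for a, b in zip(vm_a, vm_b)]

# What a coarse, per-minute average reports: comfortably under capacity.
avg = sum(combined) / len(combined)
print(f"per-minute average load: {avg:.1f} / {HOST_CAPACITY}")  # 47.5 / 100

# What fine-grained sampling reveals: ten full seconds over capacity,
# invisible to the averaged view.
saturated = sum(1 for load in combined if load > HOST_CAPACITY)
print(f"seconds over capacity: {saturated} of {len(combined)}")  # 10 of 60
```

The averaged figure (47.5 of 100) suggests a half-idle host, while second-by-second samples show the two workloads jointly exceeding capacity for ten straight seconds. That gap between what the tool reports and what the applications experience is exactly why users perceive degradation that diagnostics fail to explain.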
Without better performance monitoring, we'll all be nostalgic for the traditional approach of over-provisioning and dedicating single servers to single applications. Moreover, if virtual environments aren't properly planned, a single server crash could take down multiple business-critical applications at once.
"You have to plan on an application-by-application basis," said Richard Jones, VP and service director, Data Center Strategies, for the Burton Group. "Some applications aren't ready, such as Oracle databases."
In fact, any I/O-intensive application tends to be problematic.
The key word in the performance/reliability discussion, then, is planning. "Don't just plan from the perspective of the OS or hardware, as we did in the past. Plan from the perspective of the application or service you intend to deliver," Jones added.