Technologies to Support the 'Revolution'
If I could conduct a poll right now, I would ask, "How many of you feel that you're a part of a revolution?" Albeit, it's something of a quiet revolution, a slow-burning revolution, one that's only a little faster to watch than grass growing (or slower, since it requires years versus a single season), and one that's non-linear in its direction, with some steps forward, some steps sideways, and some steps backwards.
I suspect that, given the current economic climate and the inherently challenging state of being a CIO, most of you feel beleaguered. But feeling beleaguered and being a part of a revolution are more often than not part of the same historical continuum. Revolutions almost never grow out of comfort, and they absolutely never grow out of complacence. And the revolution in IT that I'm talking about has evolved out of the downturn following the high-tech bubble, pressures for compliance and for minimizing risk across all fronts, and new business pressures not only to minimize costs, but also, even in the currently dismal economic climate, to proactively support business competitiveness.
And that revolution has to do with the transition away from an academic model for governing IT, one defined entirely by skill groups and professional backgrounds, towards a more customer-centric model in which IT is responsible for delivering services to internal and external consumers with maximum business impact at minimum cost.
The new rules of the road still aren't well defined because the shape and direction of the road itself still isn't clear. But one of the few things that's dramatically apparent, if you believe in this revolutionary shift from one model of IT to another, is that all the factors that impact designing, provisioning, managing and optimizing new services need to be made visible, and visible dynamically. That visibility will require real-time or near-real-time insights into business impact, along with support for historical trending, process automation and portfolio planning.
And yes, there is still no magic bullet to get from your siloed tools to all of the above, but there are truly radical, actually revolutionary trends and technologies. One of these, of course, is ITIL, which in spite of some of its stuffed-shirt, British civil service roots, has evolved to provide one of the more visionary sets of guidelines to help you take the next steps in making the transition from academically organized cost center to dynamic business enabler.
From a technology perspective, there are a number of game-changing technologies, most conspicuously CMDB systems, which of course were formulated through ITIL and, in version 3, have morphed into truly federated entities known as Configuration Management Systems (CMS).
But one of the most closely related technologies (indeed, so closely related that I would argue that without it the CMDB tidal wave would have been significantly slower to form) is what most in the industry call Application Dependency Mapping (ADM). These capabilities provide you with dynamic insights into where your application services reside across the infrastructure and how configuration changes are likely to impact them. They are, in effect, the single best bet for getting cohesive insight into how infrastructure changes are likely to impact application performance or, conversely, how and where changes to existing applications are likely to tax the infrastructure.
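To make the idea concrete, the core of any ADM tool can be thought of as a directed dependency graph plus an impact query: given a change to one infrastructure component, which services are transitively affected? Here is a minimal sketch of that model in Python; the class, method names and topology (a "billing-app" on "app-server-01" backed by "db-host-07") are hypothetical illustrations, not any vendor's actual data model.

```python
from collections import defaultdict, deque

class DependencyMap:
    """Toy application-dependency map: directed edges from a consumer
    (an application or service) to the component it depends on."""

    def __init__(self):
        # component -> set of consumers that rely on it
        self._dependents = defaultdict(set)

    def add_dependency(self, consumer, component):
        """Record that `consumer` depends on `component`."""
        self._dependents[component].add(consumer)

    def impacted_by(self, component):
        """Everything transitively impacted by a change to `component`
        (breadth-first walk up the dependency edges)."""
        impacted, queue = set(), deque([component])
        while queue:
            node = queue.popleft()
            for consumer in self._dependents[node]:
                if consumer not in impacted:
                    impacted.add(consumer)
                    queue.append(consumer)
        return impacted

# Hypothetical topology: two apps sharing a database host.
adm = DependencyMap()
adm.add_dependency("billing-app", "app-server-01")
adm.add_dependency("app-server-01", "db-host-07")
adm.add_dependency("crm-app", "db-host-07")

# A change to the database host ripples up to both applications.
print(sorted(adm.impacted_by("db-host-07")))
# → ['app-server-01', 'billing-app', 'crm-app']
```

Real ADM products layer discovery (agents, flow analysis, configuration scans) on top of a structure like this so the graph stays current without manual entry; that discovery, not the graph itself, is the hard part.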
One of the most conspicuous things about the so-called ADM market is that, to a large degree, it disappeared before it was born, thanks to acquisitions. The first victim was Appilog, which was acquired by Mercury in May of 2004. This was several years ahead of the rise in CMDB awareness, and the acquisition was greeted with the mild enthusiasm reserved for arcane things that seem important but that most of the world just doesn't understand. Next, IBM acquired Collation in November of 2005. But by then, the veil was off the bride, and the true face of ADM was visible as a game-changing event.
In 2006 and 2007, the acquisitions continued, as Symantec acquired Relicore, EMC acquired nLayers, and CA acquired Cendura. In the meantime, HP acquired Mercury, which meant it had acquired the Appilog technology to complement existing ADM capabilities from its Peregrine acquisition. BMC has evolved its own ADM capability in-house, and yet even that depends on some flow-based awareness of application traffic over the infrastructure that came from a 2002 acquisition of the French company Perform SA. ASG has also evolved some in-house capability for application dependency mapping, with a unique focus on application components and application design.
There are a few free-standing ADM companies left, most notably Tideway, mValent and Troux. The fact that these remain independent should in no way be viewed as a slight against them. Tideway, probably the most mainstream of the three, in particular continues to thrive as an independent option.
So, what should you look for in ADM technology to support that dynamic awareness of changing application-to-infrastructure interdependencies? An awareness, by the way, that can in some cases become a spine for your CMDB system investments by providing a common reference point across multiple, federated CMDBs to key in on service-to-infrastructure interrelationships.
The first thing to say is that, as singular as the ADM idea may sound, each solution has arisen out of its own roots: some more focused on process, some on lifecycle application management, some on real-time awareness of application interdependencies over the total infrastructure, and some more optimized to monitor systems configuration changes as they may impact applications.
Nonetheless, here is a brief checklist of questions to ask based on your particular priorities and objectives:
▫ How real time is it?
▫ What insight can it capture or represent in terms of application-to-application interdependencies, such as middleware, or other services like DNS?
▫ Is it architected to support SOA and Web services?
▫ Can it support virtualized environments?
▫ What level of detail does it provide in terms of systems configuration?
▫ How optimized is it to support asset management and lifecycle planning for infrastructure components as they impact critical application services?
▫ How optimized is it to support troubleshooting by capturing service-to-infrastructure interdependencies along with designated owners (who owns the fix)?
▫ What types of process automation does it support, such as best practices for lifecycle application management?
▫ What role-based constituencies are supported out of the box through its reports and GUIs?
▫ What level of awareness does it provide regarding the network?
▫ As a corollary, does it support insight into application traffic as it flows over the network, so that application volumes can be associated with the need for configuration changes? This will become especially important in dynamic load balancing for VMs across the network.
▫ Is it agent-based, agentless, or both?
▫ What kinds of integrations does it support? It's not enough to assume that just because you have brand X CMDB and brand X ADM you're set for life. You may have network or systems configuration tools from other brands, so your ADM solution should be designed to be a good Lego citizen. This gives you freedom of choice to evolve your management portfolio without brand strait-jacketing.
▫ And of course, how deployable is it? What's the expected time to value for your objectives in your environment?
Dennis Drogseth is vice president of Boulder, Colo.-based Enterprise Management Associates (www.enterprisemanagement.com), an industry research firm focused on IT management. Dennis can be reached at firstname.lastname@example.org