Don't Let Your Legacy Be Your Legacy
The logical place to start a converged RISC/x86 infrastructure initiative is with the easy stuff: those applications and utilities that are of relatively recent origin and can be migrated without much drama to a virtualized environment. That's about as far as many IT departments ever get.
Doing the easy stuff first is acceptable procedure and entirely doable in a converged infrastructure environment. Unfortunately, although doing the easy stuff produces decent returns, doing only the easy stuff leaves a lot of rewards (and risk) on the table.
All the other stuff falls into the general category of legacy. It includes applications that typically perform an important, if limited, function; have been around for years; and were written in some vanishing programming language by an employee or consultant no one remembers. No one knows where the source code is, and there never was any real documentation.
This is all the hard stuff that typically causes the most complaints, hides a lot of soft costs for support, intermittently stops working and generally produces a ton of suffering for the IT department. It would seem logical to address these legacy issues aggressively and resolve them once and for all, but the tendency is to shy away from addressing the things that cause the most upset and anxiety and put them aside for later -- which is, of course, how they got to be legacy items in the first place.
I know a customer running a legacy app on a DEC Alpha that is approaching two decades of service. The only support for this living dinosaur is a little cabinet full of spare parts that were picked up cheap on some auction site for DEC relics. It would be a funny story, except that when that dinosaur finally fails, not only does the legacy app it runs stop working, but the customer's business stops working, too.
Most legacy systems were created when every application ran on its own server. In most cases, it is possible to use some middleware to allow such an application to contribute its data stream to the appropriate pool of compute resources in a virtualized environment.
Later always comes
Every plan to implement a converged infrastructure needs not only to identify all the legacy systems, but also to put in place a strategy for delivering whatever functionality they currently provide in some updated application that can be welcomed into your expanding virtualized environment as a productive member. The right way to look at legacy apps is: "We know this is an area where we are exposed; let's put a plan in place to rewrite it, or duplicate the functionality in some new system."
Ironically, most of the old COBOL, Fortran and custom-code legacy applications and utilities that IT departments are afraid to confront can be rebuilt much more rapidly today than when they were first created. Java application toolsets like SpringSource produce applications that are made to live natively in a hypervisor, have hooks built in to provide self-provisioning, are cloud-ready, and can take advantage of other features of the converged infrastructure environment.
I'm not advocating taking everything that works and makes you money and making it new just because you can. But you do need to address those things whose roadmap doesn't lead anywhere.
One easy way to replace legacy basic utility apps that are not part of your core competence is to offer the source code up to the open source community and let someone else build and enhance it. If other people find it useful, they'll make use of it. "What if it's a competitor?" you ask. Who cares? There is no competitive advantage in maintaining a piece of utility code. Sometimes all you need to do is let one of your application development managers run the community project and it takes off like wildfire.
There are also a variety of ways to stair-step an application into a converged infrastructure. For example, you can take some of those old Unix apps and migrate them to Linux under VMware on an x86 platform.
The ideal goal, of course, is to get everything into the converged infrastructure, where you have the lowest cost per port, the lowest cost for SAN storage and compute resources -- for everything -- compared to a discrete physical environment, and where you can take advantage of dynamic movement, self-provisioning, scaling and all the other benefits of a shared-resources model.
Hoping your legacy apps keep running just a little longer is not an effective IT strategy. The time to address your legacy systems and establish realistic milestones to replace them is before they cause another problem. Don't let your legacy apps hold you back from reaching the converged infrastructure that should be in your future. You can take legacy systems with you, at least part way, but when you really think about it: Why would you want to?
The next article in this series will help you figure out a system for sorting the chaff from the wheat when it comes to losing the tech trash in your data center.
Other articles in the series:
Why Can't We All Just Get Along? (Part 1/6)
How to Sell the CEO on Change (Part 2/6)
It Always Comes Down to People (Part 3/6)
Jeff Nessen is director, Platform Virtualization, at Logicalis, an international provider of integrated information and communications technology solutions and services, where he is responsible for the development of strategic virtualization solutions and the management of a team of virtualization architects. Mr. Nessen has been with Logicalis for five years. Before joining Logicalis he spent 15 years in a variety of IT and IS roles, from small businesses to Fortune 500 companies.