Dynamic Computing's Benefits and Pitfalls

By Gavin Williams


What is dynamic computing? It is an approach that combines a set of best practices with smart technology, helping enterprises become more agile by automatically scaling resources up or down in response to real-time business needs.

As the business changes, dynamic computing helps enterprises bring new capabilities online more quickly and cost-effectively. Companies can thus run tight and lean, with technology that adapts to their needs.

To illustrate how dynamic computing can benefit a company, here are some scenarios we encountered with a large customer in the financial services industry:

1. Expecting the Unexpected: Dynamic computing allows companies to react to sudden spikes in demand rapidly, efficiently, and automatically, ensuring consistent availability of critical business processes. With the ability to ‘expect the unexpected’, the dynamic computing systems of a global bank detected a surge in demand instantly. Rather than breaching its service level agreement, the company automatically brought additional capacity online to cope with the demand, and tracked that capacity for subsequent actions such as billing.

This bank’s dynamic computing gives it the business agility it needs to meet customer needs in high-demand situations and stay on top of the market. As demand tails off, the additional capacity is automatically scaled back to normal levels.
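To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of threshold-based scaling loop such a system might run. The ServerPool class, its thresholds, and the billing log are illustrative assumptions, not the bank's actual tooling.

```python
# A minimal, illustrative sketch of the threshold-based auto-scaling
# described above. The ServerPool class, thresholds, and billing log
# are assumptions for illustration, not the bank's actual systems.

class ServerPool:
    def __init__(self, servers, minimum=2):
        self.servers = servers
        self.minimum = minimum

    def utilization(self, demand):
        """Fraction of total capacity in use; each server handles 100 req/s."""
        return demand / (self.servers * 100)

    def autoscale(self, demand, billing_log):
        """Add capacity on a spike, release it as demand tails off."""
        if self.utilization(demand) > 0.80:            # spike detected
            self.servers += 1                          # bring capacity online
            billing_log.append(("provisioned", self.servers))
        elif self.utilization(demand) < 0.30 and self.servers > self.minimum:
            self.servers -= 1                          # scale back to normal
            billing_log.append(("retired", self.servers))

# Example: a surge followed by a return to normal demand.
pool, log = ServerPool(servers=4), []
for demand in [300, 900, 950, 600, 200, 150]:
    pool.autoscale(demand, log)
print(log)  # each change is recorded for subsequent billing
```

A production system would add smoothing and cooldown periods so capacity does not thrash, but the core pattern is the same: detect the spike, adjust capacity, and record the change for billing.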

2. One-Click Scaling: A division of the bank’s investment banking department in Tokyo planned to expand a pilot program to other offices on a global scale. Dynamic computing made this complicated global roll-out far more straightforward. The bank had already developed a process for rolling new functionality out to other offices and locations.

Once the IT department received approval for the roll-out, the bank’s automated IT systems kicked in and moved the solution’s functionality to its global datacenters seamlessly, with no downtime for existing users. The dynamic computing system allowed the organization to rapidly scale and grow its capabilities in response to new business requirements.
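For illustration, a simplified sketch of this one-location-at-a-time, zero-downtime roll-out pattern might look like the following; the Datacenter class and its deploy and health-check methods are hypothetical placeholders, not the bank's actual tooling.

```python
# A simplified sketch of the zero-downtime global roll-out pattern
# described above. The Datacenter class and its methods are
# hypothetical placeholders for illustration only.

class Datacenter:
    def __init__(self, name):
        self.name = name
        self.deployed = False

    def deploy(self, service):
        self.deployed = True                   # install the new functionality

    def healthy(self, service):
        return self.deployed                   # verify before taking live traffic

def rolling_rollout(service, datacenters):
    """Deploy to one datacenter at a time so existing users see no downtime."""
    for dc in datacenters:
        dc.deploy(service)
        if not dc.healthy(service):            # halt before any users are affected
            raise RuntimeError(f"roll-out halted at {dc.name}")
        print(f"{service} live in {dc.name}")  # remaining sites keep serving as-is

rolling_rollout("pilot-program",
                [Datacenter("Tokyo"), Datacenter("London"), Datacenter("New York")])
```

Rolling through locations one at a time is what keeps existing users online: at any moment, all but one datacenter is serving traffic on a known-good version.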

3. Retiring Old Applications: As new applications come online, the bank continually assesses and retires old services that no longer fill strategic roles. For example, an aging collaboration platform for the bank’s employees was on its way out, to be replaced by a newer version offering more business benefits. Usage of the older platform had been dropping off, but not quite fast enough to hit the planned decommission date.

Rather than switching the functionality off all at once, the bank keeps the collaboration platform operational while putting steps in place to accelerate its decommissioning. Services are automatically scaled down and moved to a lower SLA with a higher cost for support, all without interfering with any live transactions running on the application. When the last user logs off the old system, the Business Collaboration Manager sends an email to the IT department confirming that it is no longer needed. Because the servers already run as virtual servers, they are simply mothballed in storage in case of future need. The bank’s dynamic computing covers the entire application lifecycle, from development through production to easy decommissioning.
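As a rough sketch, the policy-driven wind-down described above could be expressed as a periodic job like the one below; the field names, thresholds, and notify_it_department helper are assumptions for illustration, not the bank's actual process.

```python
# An illustrative sketch of the policy-driven decommissioning described
# above. All names and thresholds are assumptions for illustration.

def notify_it_department(app):
    print(f"{app['name']}: last user logged off; system no longer needed")

def decommission_step(app):
    """Run periodically: wind down a retiring application without
    interrupting live sessions."""
    if app["active_sessions"] == 0:
        app["status"] = "mothballed"                   # archive the VMs for future need
        notify_it_department(app)
    else:
        app["capacity"] = max(1, app["capacity"] - 1)  # scale down gradually
        app["sla_tier"] = "low"                        # lower SLA, higher support cost

legacy = {"name": "old-collab", "active_sessions": 3,
          "capacity": 8, "sla_tier": "standard"}
decommission_step(legacy)          # scales down while users remain
legacy["active_sessions"] = 0
decommission_step(legacy)          # mothballs once the last user logs off
```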

4. Enabling Innovation: Dynamic computing can also help IT departments shift from a ‘lights on’ operation to a proactive, forward-looking approach. For example, the bank implemented a high-performing computing environment. In doing so, it has been able to turn most routine business demands into structured, policy-supported IT operations: it anticipates customer and employee demands and responds according to predefined policies. Having handled the routine activities of the day, the IT department can consider how newer technologies might accelerate and improve the bank’s offerings in the market. This moves the bank’s infrastructure from an IT cost center to a strategic business enabler.

While dynamic computing offers significant benefits, a cautious company will be careful to avoid some of the pitfalls along the way.

1. Drinking Too Much Kool-Aid: All too often, CIOs become enthralled with the promise of dynamic computing and commit too many resources to implementing it where there is no concrete business case. Business needs should drive technology decisions, not the other way around. Dynamic computing relies on economies of scale and requires upfront investments that become a fixed cost for management, monitoring and detection.

If a company is only making a handful of changes to its environment each year, dynamic computing may not be the best approach. For example, there is no point in creating a multi-tier automated provisioning system if the business plan does not forecast the multiple changes and new services that would require it. On the other hand, if your environment requires a high level of flexibility and scalability because your technology needs are continually changing, a dynamic computing strategy is worth investigating.

2. Can’t See the Forest for the Trees: Companies often get so bogged down in technology and logistical details while developing a large-scale dynamic computing environment that they fail to notice their solution no longer addresses changing business needs.

Companies should be wary of becoming too granular and prescriptive in their dynamic computing solutions. The purpose of an efficient computing environment is to minimize change and the costs that come with it.

Even though automation lets you provision and de-provision, or move virtual machines from one host to another, do not erode the cost and time benefits you have gained through constant minor tweaking that yields only marginal additional benefit. If you are constantly adjusting your capacity to follow demand exactly, you will lose sight of the larger issues. Worse, you will divert important resources that should be focused on managing and adapting to change. Sometimes the highest level of availability is achieved by simply finding a configuration that works and leaving it alone for as long as possible.

3. Time is of the Essence: Projects as sweeping and significant as dynamic computing can collapse under their own weight, layered with dependencies and with benefits and payback many years out. By the time a monolithic IT project is implemented, the business has often changed so much that the benefits of the original project are no longer relevant.

We saw this in the 1990s, when many customers implemented business processes that grew so large and unwieldy they became unmanageable. Companies need to keep in mind that timeliness is key to successful dynamic computing initiatives, and that solutions can become dated if they take too long to deliver. Companies today need to see results within the first year, so planning the implementation is key.

4. Going it Alone: Implementing a dynamic computing solution impacts many areas of IT. Many companies feel they can go it alone and begin implementing a dynamic computing solution based on their own internal assessment. However, talented people with real-world dynamic computing experience are rare, and even a company that does have the internal expertise often lacks the objectivity necessary to evaluate and change its own environment. Third-party experts who can provide unbiased guidance can be crucial in planning and implementing innovative dynamic computing solutions.

Gavin Williams is director of Infrastructure and Security Solutions at IT consultancy Avanade.