Disaster Recovery Should Plan for WAN

By Jeff Vance

Any IT manager can tick off a handful of recent events that make disaster recovery (DR) a top IT priority. Hurricanes Katrina and Rita, the Asian tsunami, this spring's tornadoes in Kansas, and, of course, 9/11 all emphatically make the case. No need for in-depth ROI studies. Say "Katrina" and the C-level executives will find the funds.

But are these plans as well thought out as they should be? According to Kevin Hoehnbrink, a product manager at F5 Networks, a provider of application acceleration equipment, too many DR plans miss the point.

“I try to remind people that disaster recovery is just a component of a larger, ongoing issue—business continuity,” he said.

It doesn't take an earthquake, hurricane, or terrorist attack to impact business continuity. Poor application performance, especially across the WAN, is something IT sees every day, and it can cause just as many problems as a disaster.

Worrying about unforeseen disasters is important, but figuring out how to get applications to overcome latency so they work as well at branch offices as at headquarters is a more pressing challenge.

Bandwidth: The Ongoing Problem

This is where WAN optimization comes into play. It can boost application performance over the WAN and serve as a delivery foundation for DR. Despite its promise, WAN optimization is still a technology that escapes the notice of many IT managers and CIOs.

The space is fairly crowded: incumbents like Cisco, Juniper, and Packeteer have all entered through acquisitions, and they compete with startups such as Expand Networks, F5, Riverbed, and Silver Peak Systems. Even so, the technology isn't as widely adopted as it should be.

“There definitely needs to be more market awareness,” said Robert Whiteley, a senior analyst with Forrester Research. “Too many CIOs are unaware of new technologies and are still just looking at the network.”

With a network-only viewpoint, capacity planning treats peak traffic as the threshold: to guarantee enough bandwidth, you provision, say, double the peak and consider yourself protected.

“When you think about a full-blown disaster recovery situation, capacity needs will be well beyond your peak traffic,” Whiteley said.

WAN optimization helps you to meet your DR capacity needs without expensive, last-minute bandwidth outlays. In the case of a disaster, you can realistically accommodate that new load without adding a new link that will cost you $5,000 a month or so, but only if you have the right technologies in place.
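A rough sketch in Python makes Whiteley's point concrete. The numbers below are hypothetical, chosen only to illustrate the arithmetic; actual peak loads, DR replication volumes, and reduction ratios vary widely by environment and vendor.

```python
# Hypothetical figures to contrast peak-based provisioning with DR needs.
peak_mbps = 45                        # observed peak business-hours traffic
provisioned_mbps = 2 * peak_mbps      # the "double the peak" rule of thumb

# In a full failover, replication plus rerouted user traffic can far exceed
# the normal peak; assume 4x here purely for illustration.
dr_load_mbps = 4 * peak_mbps

# Assume WAN optimization trims traffic on the wire by roughly 70 percent
# (a conservative placeholder; vendor claims run higher).
reduction = 0.70
optimized_dr_mbps = dr_load_mbps * (1 - reduction)

print(f"Provisioned link:        {provisioned_mbps} Mbps")
print(f"Raw DR load:             {dr_load_mbps} Mbps (exceeds the link)")
print(f"DR load after reduction: {optimized_dr_mbps:.0f} Mbps (fits on the existing link)")
```

Under these assumptions, the provisioned 90 Mbps link cannot carry a 180 Mbps failover load, but it comfortably carries the roughly 54 Mbps that remains after optimization.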

A recent Forrester study on DR and the WAN found that nearly a third of the costs associated with DR plans go directly to bandwidth. Even worse, a majority of those polled in the study said that a lack of bandwidth prevented them from extending backup protection to remote sites. Since fewer and fewer organizations rely on offsite tape vaulting for DR, WAN-related issues become more pressing and more limiting.

As awareness of disaster planning grows, many organizations are forming enterprise risk management (ERM) groups, and issues like business continuity are gaining visibility beyond IT. ERM groups are quick to advocate having a failsafe in place, and for business continuity that failsafe is technology.

But technology can only do so much. While C-level executives may believe you can simply drop DR/BC equipment into place, it’s not that simple. “Disaster recovery and business continuity are more than anything else about processes and procedures,” Whiteley said. “With certain innovations in technology, you can automate many processes, but it soon becomes obvious that an unreliable communications medium, the WAN, is the weak link that could undermine even the best plans.”

For instance, backup and recovery strategies require data to be replicated offsite, but if the WAN can’t support the traffic, all of that planning falls short of expected goals.

“Making Exchange work at a branch office is one thing. Achieving complete failover and disaster recovery is another,” Whiteley warned.

From Disasters to Everyday Nuisances

When Ken Adams, the IT manager for law firm Miles & Stockbridge, started looking at WAN optimization, his goals were modest. "Looking ahead, we knew we needed to update our business continuity and disaster recovery capabilities, but a more pressing need was getting acceptable application performance at our branch offices," he said.

Located in the mid-Atlantic region, Miles & Stockbridge has approximately 200 lawyers practicing in nine separate offices. According to Adams, several separate trends are converging to highlight the need to improve WAN performance, such as application centralization, disaster recovery, and limited IT support resources.

As applications migrate from branch offices to the main office as part of centralization and virtualization efforts, data stays up to date and equipment is used more efficiently, but a new problem emerges: applications perform poorly away from the main office.

Miles & Stockbridge chose Silver Peak Systems' WAN optimization solution and experienced improvements immediately. With WAN-optimization boxes at each site, Internet connections at one office can serve as failover connections for another. Moreover, since Silver Peak encrypts traffic, Adams said they now have a private, redundant, meshed network over the public Internet.

“I was able to drop all second and third ISP connections in branch offices. The WAN performance has improved to the point where the main office can now be an ISP for the branches.” Adams estimated that Miles & Stockbridge would see ROI within two years based on bandwidth alone.
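Adams's estimate comes down to simple payback arithmetic: the circuits he dropped offset the cost of the appliances. The figures below are invented for illustration and are not Miles & Stockbridge's actual numbers.

```python
# Hypothetical payback calculation; every figure here is illustrative.
appliance_capex = 110_000        # total cost of WAN optimization boxes across all sites
dropped_circuits = 16            # second and third ISP links removed from branch offices
circuit_cost_per_month = 350     # monthly cost of each dropped circuit

monthly_savings = dropped_circuits * circuit_cost_per_month
payback_months = appliance_capex / monthly_savings
print(f"Payback on bandwidth savings alone: {payback_months:.1f} months")
```

With these placeholder numbers, the payback period is just under 20 months, consistent with an ROI inside two years on bandwidth savings alone.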

“Bandwidth savings is the easiest thing to quantify,” admitted Jeff Aaron, director of product marketing for Silver Peak. “But it’s the other pain-points that really make an IT staff look good when they deploy WAN optimization.

“The guy who is responsible for an application that performs poorly in a branch office no longer fields angry calls. The CIO responsible for business continuance now has a plan that doesn’t just collect dust on a shelf, and new technologies like virtualization can be made to work once you get the WAN to perform optimally.”

Startups Push the Envelope

Better performance can be achieved through a variety of techniques. The incumbent vendors have the advantage of an installed base onto which they can add optimization. Data deduplication, header compression, and TCP acceleration are all methods of improving performance. None of these is new, but getting the acceleration cocktail to work en masse is an engineering feat.

What is new is byte caching, and this is where the startups in the space excel. Their purpose-built platforms allow them to cache at the byte level, and combined with other compression and acceleration techniques, they claim to reduce sent traffic by up to 95% with little or no information lost.

In its simplest terms, byte caching looks for repeated data and transmits only one instance of it. Indexes (or tokens) are created to represent the larger blocks of data, and traffic is reduced dramatically. "Byte caching is the best weapon the startups have," said Whiteley.
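A minimal sketch in Python shows the mechanism, assuming a naive fixed-size chunking scheme; real appliances use content-defined, variable-length chunks and keep their caches synchronized between the two ends of the WAN link, but the principle is the same: send each byte sequence once, then send short tokens.

```python
import hashlib

CHUNK = 256  # fixed-size chunks for simplicity; real systems find boundaries by content

def encode(stream: bytes, cache: dict) -> list:
    """Replace previously seen chunks with short tokens (their hashes)."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        token = hashlib.sha1(chunk).digest()
        if token in cache:
            out.append(("ref", token))      # repeat data: send only the 20-byte token
        else:
            cache[token] = chunk
            out.append(("raw", chunk))      # first occurrence: send the data itself
    return out

def decode(messages: list, cache: dict) -> bytes:
    """Rebuild the original stream on the far side from raw chunks and tokens."""
    parts = []
    for kind, payload in messages:
        if kind == "raw":
            cache[hashlib.sha1(payload).digest()] = payload
            parts.append(payload)
        else:
            parts.append(cache[payload])    # expand the token from the local cache
    return b"".join(parts)

# Toy run: a stream with heavy repetition, as in backup or replication traffic.
sender_cache, receiver_cache = {}, {}
data = (b"A" * 1024 + b"B" * 1024) * 4
wire = encode(data, sender_cache)
assert decode(wire, receiver_cache) == data

sent = sum(len(p) if kind == "raw" else 20 for kind, p in wire)
print(f"original: {len(data)} bytes, on the wire: roughly {sent} bytes")
```

In this toy run, roughly 8 KB of repetitive data crosses the link as a little over 1 KB, and the receiving side reconstructs it exactly from its own cache.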

Incumbents have integrated this technology as well, but an argument can be made that purpose-built systems perform better than legacy equipment with these new techniques bolted on.

Hype Cycle?

Is this technique new, though, or is this vendor hype? Byte caching sounds an awful lot like what was going on several years ago when extending enterprise applications to mobile devices was all the rage (a trend that has yet to pan out).

“Granted, this isn’t exactly novel,” Whiteley said. “Whether it’s mobile applications or disaster recovery, the commonality is finding a way to improve an underperforming link.”

What WAN optimization had that mobile enterprise applications lacked was an industry segment, namely storage, with an immediate problem. "Bandwidth is still doubling every year, but so are storage capacity needs," Whiteley noted.

Storage started with data deduplication, which reduced the need to store redundant data. Then WAN optimization companies took the next step, creating a system that applies not only to data at rest, but also to data in motion. Now, coming full circle, many WAN optimization vendors are developing software clients that can be deployed on any device, mobile or fixed.