Tips on Disk-to-Disk Backup, Part III

In the first column of this series I explained the concept behind disk-to-disk backups. Simply put, disk-to-disk backup is when the primary backup is written to disk instead of tape. That backup can then be copied, cloned or migrated to tape at a later time, or simply left on disk.
It's a very simple concept that, implemented correctly, can greatly affect a company's ability to reliably back up and recover data. That column also explained that there are essentially three ways to implement a disk-to-disk-to-tape backup: from the host, from the SAN (in the form of an appliance) or within a tape device. In last month's column I discussed the host-based solutions. This column takes a closer look at the appliance-based option.
Virtual Tape Library
Backup solutions usually consist of three pieces: a backup server, backup clients and a tape device. With the host-based solution, the backup server needs extra software (to write to disk instead of tape) and a large pool of disk storage. With the appliance-based solution, a server is added, either bundled with disk or with disk purchased separately.
The backup software remains the same. An appliance-based solution is generically called a virtual tape library (VTL). A VTL can emulate many of the tape libraries that backup software already recognizes, so it can be integrated seamlessly into the existing infrastructure.
Virtual tape is a concept that has been in the data center for many years. Originally introduced for IBM mainframes, it is now exploding in the open-systems arena. Logically, a VTL appears and operates just like a physical tape library, complete with virtual tape drives, data cartridges, tape slots, barcode labels and robotic arms.
Physically, a VTL is a highly intelligent, optimized disk-based storage appliance. Because a VTL completely emulates a standard library, the introduction of virtual tape is seamless and transparent to existing tape backup/recovery applications.
Traditional tape devices have a few problems: they are slow, and the only way to solve that problem is to add more and more tape drives. Tape library robotics are prone to failure, and the tape media itself is delicate and must be stored in a conditioned, secure environment.
To increase backup performance, backups can be multiplexed across multiple drives and tapes. But multiplexing also increases the odds of a failed backup due to a bad tape, a faulty drive or malfunctioning robotics.
* (See story correction below.)
Restores from tape are also time-consuming. Consider trying to recover a file that was part of a 5-tape multiplexed backup. Each of the five tapes must be located in the library and loaded into tape drives. If the drives have tapes in them already, the tapes must be removed from the drives and moved to free slots before another tape may be loaded. Once the tapes are loaded, they must be advanced to where the file is and then, finally, the file can be read from tape. It can take many minutes just to start the recovery. If the tapes are not in the library, it can take many hours to recover a single file.
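To put rough numbers on that startup delay, here is a back-of-the-envelope model of the mechanical steps described above. The per-step times are illustrative assumptions, not measurements from any particular library:

```python
# Rough model of the mechanical delay before a restore from a
# 5-tape multiplexed backup can even begin. Step times below are
# illustrative assumptions, not vendor specifications.

UNLOAD_S = 30   # eject a tape already sitting in a drive
MOVE_S = 15     # robot-arm trip between a drive and a slot
LOAD_S = 30     # load and thread the next tape
SEEK_S = 90     # wind the tape forward to the wanted file

def restore_startup_seconds(tapes_needed, drives_occupied):
    """Seconds until all tapes are loaded and positioned,
    assuming a single robot arm working sequentially."""
    # First clear out any tapes already in the drives we need.
    clear = drives_occupied * (UNLOAD_S + MOVE_S)
    # Then fetch, load and position each tape of the backup set.
    stage = tapes_needed * (MOVE_S + LOAD_S + SEEK_S)
    return clear + stage

total = restore_startup_seconds(tapes_needed=5, drives_occupied=5)
print(f"{total} s = {total / 60:.0f} minutes before the first byte is read")
```

Even with these optimistic step times, the model lands at a quarter of an hour of pure mechanics before any data moves; a VTL skips all of it.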
On the plus side, tape-based solutions are usually considered to be relatively inexpensive. But when the tape media is considered, the cost can skyrocket.
Some studies show that users will buy 30 times the number of slots' worth of tapes over the life of a library. For a medium-sized, 100-slot library, that's 3,000 tapes. LTO-3 tapes are currently in the $100 price range. Add the extensive human intervention required to manage and maintain a tape solution, and tape is not so inexpensive after all. It can cost a lot of money to be able to successfully back up your data 60% of the time.
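The media arithmetic above is simple enough to spell out, using the column's own figures (the 30x rule of thumb and the ~$100 LTO-3 street price):

```python
# Lifetime media cost for the 100-slot library discussed above,
# using the column's figures: roughly 30x the slot count in tapes
# over the library's life, at about $100 per LTO-3 cartridge.

SLOTS = 100
TAPES_PER_SLOT_LIFETIME = 30   # rule of thumb cited above
TAPE_PRICE_USD = 100           # approximate LTO-3 street price

tapes_bought = SLOTS * TAPES_PER_SLOT_LIFETIME
media_cost = tapes_bought * TAPE_PRICE_USD

print(f"{tapes_bought} tapes -> ${media_cost:,}")  # 3000 tapes -> $300,000
```

Media alone can dwarf the library's purchase price, before counting the labor of handling those 3,000 cartridges.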
Comparing these tape problems with a virtual tape solution shows how a VTL can change the way a data center is run.
A virtual tape library can perform backups at up to 10 times tape speed. Faster backups greatly shrink the backup window. And with existing backups finishing sooner, second- and third-tier servers that have never been backed up may now fit into the backup schedule.
Recoveries are also significantly faster with a VTL (typically much faster than the backup). A single file can be recovered from a VTL faster than most tape libraries can find and load a tape into a drive. Full backups that span multiple tapes (as opposed to multiplexed backups) also recover slowly compared to a VTL: after the data is read from one tape, the next tape must be located and loaded, while a VTL just keeps streaming data from disk.
All virtual tape loads are immediate, so there is virtually no delay when a new tape is loaded. People who resort to multiplexing to increase backup performance are usually shocked to discover that their recovery times run about twice as long as the backup did.
Most virtual tape library solutions also contain RAID-protected storage with redundant, hot-swap components (drives, power, cooling). Backups to a VTL rarely fail because of a VTL failure, and recoveries will never fail due to a bad or lost tape.
VTLs are just not prone to the types of failures that a traditional tape library has. For example, a backup to disk will never fail because of a bad tape, broken tape drive or broken robotics.
The initial cost of a tape library can be less than that of a VTL. But when a three- or five-year cost of ownership is considered (tape media, failed backups, data lost to failed recoveries, management costs and so on), a VTL will be less expensive.
Also consider the lower cost of backup software. Some backup software is tiered by the number of tape slots. By configuring a virtual library with few slots but very large tapes, the software tier can be lowered. For backup software that is tiered by the number of tape drives, configure a virtual library with fewer drives. Some backup software vendors are now adding a virtual tape library option priced on the capacity of the library.
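The slot-count trick can be sketched with a toy licensing table. The tier boundaries, prices, and the 4 TB virtual tape size below are hypothetical, made up purely for illustration; real backup products publish their own schedules:

```python
# Hypothetical slot-based license tiers: (max slots, license cost).
# These numbers are invented for illustration only.
TIERS = [(20, 5_000), (60, 12_000), (250, 30_000)]

def license_cost(slots):
    """Return the license cost for a library with this many slots."""
    for max_slots, cost in TIERS:
        if slots <= max_slots:
            return cost
    raise ValueError("beyond published tiers: call the vendor")

# The same 50 TB of backup data presented two ways by one VTL:
# 125 LTO-3-sized (400 GB) virtual tapes vs. 13 oversized 4 TB tapes.
many_small = license_cost(125)  # lands in the top tier
few_large = license_cost(13)    # drops to the cheapest tier
print(f"125 small tapes: ${many_small:,}; 13 big tapes: ${few_large:,}")
```

Same capacity, same backup software, but the few-slots/big-tapes configuration falls several tiers lower, which is exactly the economy the paragraph above describes.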
Today's virtual tape libraries range from a customer-supplied server running VTL software with separate disk, to a completely productized solution in which server, software and disk are all bundled.
There are pros and cons for both extremes.
With an unbundled solution, the user gets to purchase each piece separately. The pieces include the VTL software, server, disk and potentially the SAN infrastructure. Unfortunately, the user must also purchase separate support agreements for the VTL software, server, disk and SAN infrastructure and each piece must be managed and monitored by the IT staff.
With a bundled solution, all the pieces are included, tightly integrated and guaranteed to work together. The solution is managed and monitored as one entity and support is covered by one contract.
With current bundled VTL solutions scaling to a petabyte or more, capacity is not an issue; adding an additional VTL for each petabyte of backup is acceptable for most environments. The only negative with a bundled solution is that it is bundled, and some people just do not like that. When it comes to backup, it is best to keep things simple, so a bundled VTL solution is probably the best bet (there are not many home-made tape libraries in production, so why build your own VTL?).
Adding a virtual tape library to an existing tape environment will always improve the reliability of backups and recoveries. Even if a VTL is configured to back up only as fast as the existing library, the increase in day-to-day data recovery performance makes the investment a no-brainer.
Add robust data-processing capabilities not available in physical tape libraries (e.g., replication and single-instance storage), and a VTL can open the door to tremendous advances in tape backup methodology and revolutionize traditional operations.
If you are not considering a VTL today, you should tomorrow.
Jim McKinstry is senior systems engineer with Engenio Information Technologies, an OEM of storage solutions for IBM, TeraData, Sun and others.
* The following paragraph, which was originally part of the article, was wrongly attributed to the analyst firm Gartner. Gartner said it never published these numbers: "The analyst firm, Gartner, has reported that almost 50% of all backups are not recoverable in full, and that approximately 60% of all backups fail in general. These failures are mostly associated with tape, drive or robotic failures."