Leveraging Existing IT Assets: A Fiscal Fitness Plan

By Drew Robb

The recession is officially over. At least, that is what the economists say. But that doesn't mean IT budgets will suddenly open up.

According to a recent report by Giga Information Group, "The great majority of enterprises are and will be focusing on leveraging their existing IT assets more effectively in 2002."

So for now, it remains a case of making do with what you have while figuring out inexpensive ways to extend the life, performance and functionality of existing resources. To help, here is a four-step fitness plan to economically boost your systems' health.

Slim Down

Some companies are finding they can trim costs by slimming down. By using thin-client applications they can simplify software deployment and management, eliminate bandwidth upgrades and gain added life out of workstations. And, when it does come time to replace client hardware, they can do so using solid-state thin-client terminals instead of PCs, thereby reducing capital outlay and maintenance costs.

When using a thin-client application, all processing takes place on the server. All that travels over the network are keyboard and mouse inputs from the client to the server, which then sends a screen image back to the client. Users gain fast access to enterprise applications even over slow dial-up connections.
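To make that division of labor concrete, here is a minimal, purely conceptual sketch in Python of the round trip. It is not Citrix's ICA or Microsoft's RDP protocol; the session state, event format and "rendered view" string are illustrative assumptions.

# Toy illustration of the thin-client round trip: the client sends only
# input events, the server does all processing and returns a rendered screen.
# Conceptual sketch only -- not a real remote-display protocol.

def server_process(session_state, input_event):
    """All application logic runs server-side; only a screen image goes back."""
    session_state["document"] += input_event.get("keystrokes", "")
    return f"[rendered view of: {session_state['document']!r}]"

def thin_client_session():
    session_state = {"document": ""}              # lives entirely on the server
    inputs = [{"keystrokes": "Dear "}, {"keystrokes": "customer,"}]
    for event in inputs:                          # tiny payload over the wire
        screen = server_process(session_state, event)
        print("client displays:", screen)         # client only paints pixels

if __name__ == "__main__":
    thin_client_session()

The point of the pattern is that the traffic crossing the wire stays tiny no matter how heavy the application is, which is why it holds up even over dial-up.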

The State of California's Department of General Services (DGS), for example, used thin-client technology to reduce office space. Two thousand employees now telecommute.

"We weren't set up to go into homes to configure workstations or deploy software," says Jamie Mangrum DGS Operations Manager for Enterprises in Sacramento, Calif. "You can't force employees to do things like that."

DGS installed Microsoft Windows Terminal Services and Metaframe software by Citrix Systems, Inc. of Fort Lauderdale, Fla. Metaframe is installed on each application server, translating communications between servers and clients. Telecommuters access a DGS interface on their home computers.

"This approach enables us to do more with the same staff," says Mangrum.

Technology by Tarantella, Inc., based in Santa Cruz, Calif., takes a different approach. Rather than sitting on application servers, its Enterprise 3 software resides on a Linux or UNIX box between the users and the servers. It takes the data from servers or mainframes, converts it into a web document, and relays it to the user. Clients access programs via a web browser.
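The pattern is essentially a web gateway in front of legacy systems. The sketch below is a hypothetical Python stand-in, not Tarantella's actual software: a middle-tier process pulls raw output from a back-end application (the fetch_from_backend() call and its sample output are invented for illustration) and republishes it as an HTML page any browser can display.

# Minimal sketch of the gateway pattern: a middle tier republishes back-end
# output as a web page. Illustrative only -- not Tarantella's product.
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_from_backend() -> str:
    # Hypothetical stand-in for a call to a UNIX or mainframe application.
    return "ACCOUNT 1042   BALANCE 318.40   STATUS OK"

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        raw = fetch_from_backend()
        html = f"<html><body><pre>{raw}</pre></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html.encode("utf-8"))

if __name__ == "__main__":
    # Users point their browsers at this middle tier instead of the back end.
    HTTPServer(("", 8080), GatewayHandler).serve_forever()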

Detroit-based DTE Energy used Enterprise 3 to simplify system integration after acquiring a gas company. "As we migrate them to the DTE site, they use Tarantella to access corporate applications," says John Townsend, Manager of Network Operations, "so we didn't have to move the applications over."

Non-Stop Workout

For enterprises looking for high-powered number-crunching capabilities, distributed computing offers supercomputing power using existing desktops and servers. The concept is simple -- take advantage of available processing power. Except for occasional short bursts of activity, desktop CPUs run at a fraction of capacity. On nights and weekends, particularly, they do nothing at all.

To get an idea of how much of your computing power lies unused, Grid computing firm Entropia has an online calculator. Type in how many computers you have and what processors they use, and it calculates the underutilized capacity. For instance, 1,000 1 GHz processors are the equivalent of 88 Silicon Graphics 16-way Origin 2000 servers.
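Entropia's exact formula isn't published here, but the back-of-the-envelope version is simple: multiply the fleet's aggregate clock speed by an assumed average utilization and see what is left over. The 10% utilization figure below is an illustrative assumption, not Entropia's number.

# Rough idle-capacity estimate for a desktop fleet. The utilization figure
# is an assumption for illustration only.

def idle_capacity_ghz(num_machines, clock_ghz, avg_utilization=0.10):
    """Aggregate clock cycles left on the table across a fleet of desktops."""
    total = num_machines * clock_ghz
    return total * (1.0 - avg_utilization)

if __name__ == "__main__":
    unused = idle_capacity_ghz(1000, 1.0)   # the article's 1,000 x 1 GHz example
    print(f"Roughly {unused:.0f} GHz of aggregate capacity sits idle")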

Distributed or Grid computing captures all these unused processor cycles and puts them to good use. The best-known example is the SETI@Home project, which uses the combined power of home computers to search for signs of extraterrestrial life. The project averages 33 teraflops of processing, nearly three times the speed of the world's fastest supercomputer, IBM's ASCI White.

But Grid computing also makes sense for more down-to-earth applications. Intel and Sun Microsystems use Grid computing in chip design. Pratt & Whitney runs computer simulations of turbine engines using Platform Computing's LSF software. Caprion Pharmaceuticals in Montreal uses Sun's GridEngine to speed up the analysis of human biological samples. A four-CPU server takes 720 hours to analyze one sample, but the Grid software lets the company assign up to 76 processors to the job.
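The arithmetic behind the Caprion figures is worth spelling out. Assuming the analysis parallelizes close to linearly (real workloads rarely scale perfectly), 720 hours on four CPUs is roughly 2,880 CPU-hours, which spread across 76 processors comes to something under 40 hours of wall-clock time per sample:

# Rough arithmetic behind the Caprion example, assuming near-linear scaling.

def estimated_runtime_hours(baseline_hours, baseline_cpus, grid_cpus):
    cpu_hours = baseline_hours * baseline_cpus   # total work, in CPU-hours
    return cpu_hours / grid_cpus                 # ideal wall-clock on the grid

if __name__ == "__main__":
    hours = estimated_runtime_hours(720, 4, 76)
    print(f"~{hours:.0f} hours per sample instead of 720")   # roughly 38 hours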

"Organizations using it are the same ones that 15 years ago would have been using Cray computers," says Brent Sleeper, principal of The Stencil Group, a San Francisco-based consulting firm specializing in Web services and enterprise software. "By going for a highly distributed approach they are getting much the same power but at a lower cost."

Improving Endurance

Corporations typically utilize a three-year PC refresh cycle. But for the last several years, processor speed has greatly outstripped the needs of most business applications. While a 2.2 GHz Pentium 4 may shave a few milliseconds off processing a letter, that doesn't translate into bottom-line advantage.

As a result, Gartner, Inc., of Stamford, Conn., last year switched from recommending a three-year refresh cycle to a staggered one. Those who need additional computing power, such as engineers and high-end graphics or data mining specialists, are upgraded more frequently, while most users receive a new machine every four years.

This strategy, however, is flawed unless machines retain their as-new performance throughout the longer cycle. On Windows operating systems in particular, workstations and servers suffer steady performance degradation due to disk fragmentation, anywhere from 20% to 200%, according to tests conducted by software testing firm NSTL, based in Conshohocken, Pa. Unless defragmented regularly, these machines become sluggish long before they are scheduled for upgrade.

"Some companies, unaware of the impact of fragmentation, are likely to resolve such a performance impact with more expensive acquisitions of higher-performance hardware," says International Data Corp. analyst Steve Widen. "However, it is just a matter of time before fragmentation impacts the new machines because this process only temporarily masks the performance problem."

In addition, the excessive disk I/O caused by fragmentation leads to premature hard drive failure, wiping out the anticipated hardware cost savings. It is therefore imperative to install a networkable disk defragmentation program in order to keep the machines running at their peak and extend their useful life.
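In practice, a networkable product such as Diskeeper ships with its own scheduler and agents; the sketch below only illustrates the general idea of a fleet-wide, off-hours defragmentation pass. The host names and the run_defrag() call are hypothetical placeholders, not any vendor's API or command syntax.

# Conceptual sketch of a fleet-wide, off-hours defragmentation pass.
# Hosts and run_defrag() are stand-ins; a real product supplies its own agent.
import datetime

WORKSTATIONS = ["ws-0101", "ws-0102", "ws-0103"]   # hypothetical inventory

def run_defrag(host: str) -> None:
    # Stand-in for dispatching a defrag job to the host's agent.
    print(f"{datetime.datetime.now():%H:%M} dispatching defrag pass to {host}")

def nightly_pass(now: datetime.datetime) -> None:
    # Run only outside business hours so users never feel the extra disk I/O.
    if 1 <= now.hour <= 5:
        for host in WORKSTATIONS:
            run_defrag(host)

if __name__ == "__main__":
    nightly_pass(datetime.datetime.now())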

Boeing's Space and Communications Division, for example, uses Diskeeper by Executive Software of Burbank, Calif. "We can see a vast increase in performance using this new tool, especially on cluster servers within our 8 TB storage area network and Citrix Metaframe," said Boeing Systems Administrator Tim McGovern.

As a result of regularly defragmenting all machines, Boeing was able to achieve high performance out of older 450 MHz Dell OptiPlex machines, delaying the need to replace them with higher-powered boxes.

Staying Compact

Storage demands continue to explode with ever-increasing file sizes. Enterprises typically find their needs doubling every year. While disk capacities are also growing, some companies are reducing their server room footprint using "pizza box" and "blade" servers.

When the Air Force's Center for Research Support (CERES) at Schriever Air Force Base in Colorado needed to boost its satellite simulation capacity, for example, it didn't want to build a larger facility to host additional servers. CERES selected NexStor 3150 servers from nStor Technologies, Inc., based in San Diego, which cram up to eight 73 GB disks (584 GB total) into a single 3.5-inch-high unit. That means up to 9.1 TB of storage fits into a single rack.

Compaq ProLiant BL blade servers are another alternative. Each server -- including hard disk, CPU and memory -- sits on a card rather than in its own box. Twenty 30 GB servers reside in a single 5.25-inch enclosure, sharing a power supply, fan and wiring. Up to 280 servers fit into a standard rack.
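The density figure checks out with simple arithmetic, assuming a standard 42U rack and a roughly 3U (5.25-inch) enclosure holding 20 blades. The per-blade disk size comes from the article; everything else is plain counting.

# Quick check of the rack density quoted above.
RACK_UNITS = 42              # standard full-height rack
ENCLOSURE_UNITS = 3          # ~5.25 inches at 1.75 inches per U
BLADES_PER_ENCLOSURE = 20
DISK_GB_PER_BLADE = 30

enclosures = RACK_UNITS // ENCLOSURE_UNITS          # 14 enclosures
blades = enclosures * BLADES_PER_ENCLOSURE          # 280 servers per rack
storage_gb = blades * DISK_GB_PER_BLADE             # 8,400 GB of local disk

print(f"{blades} blade servers and ~{storage_gb:,} GB of disk per rack")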

The blade server architecture not only saves money by consolidating duplicated components (one fan and power supply instead of 20, for example), it also cuts power requirements by 75%.

This story first appeared on internet.com's Datamation.