"We expect that the Grid computing market will grow from US $180 million today to US $4.1 Billion by 2005," concludes "The Global Grid Computing Report 2002: Technology and Market Assessment" by Grid Technology Partners, a research and consulting firm.
The report, authored by Ahmar Abbas, an electrical engineer who worked at UUNET and ONI Systems before launching Grid Technology Partners, and Nabeela Khatak, said that fewer than 1% of companies use Grid computing today, a figure the authors expect to grow to 10% by 2005. Among large companies (those with more than 10,000 employees), they project a penetration rate of 40% by 2005.
The report also examines the Open Grid Services Architecture efforts to merge Grid computing with Web services.
"There is still a lot of work that needs to be done" on the Grid Service Specification at the core of OGSA, the report said. "Whether OGSA is an IBM-driven marketing push to counter Microsoft's .NET initiative, or whether it is a serious contender that will be heartily accepted by enterprises, remains to be seen. There is, however, great optimism that OGSA will facilitate adoption of Grid technologies for traditional IT applications in addition to R&D applications because it is based on standard Web services standards. Almost all the major Grid technologies vendors have signed on to support OGSA and there has been no competing effort put forth at the Global Grid Forum," the standards-setting body.
Could There Be A GridNet?
The report examines the question of whether there could be a "GridNet" similar to the Internet.
"There is an uncanny similarity between the activities taking place in the Grid computing community, especially in the academic and research realms, and the activities that led to the creation of the Internet," they wrote. "The protocols are the same (IP), work (scientific, R&D) and funding (government, public sector) are similar. The logical question to ask is whether there will be such a thing as the GridNet that is independent from the Internet infrastructure."
They suggested the National Science Foundation-funded Distributed Terascale Facility (TeraGrid) as a possibility for a GridNet. The funding for the TeraGrid project also includes funding for creating a separate high-speed network that connects participating institutions in Chicago to those in Southern California, the report said. There are already plans for additional institutions to join the TeraGrid, such as the Pittsburgh Supercomputing Center, they said.
"We expect that over the next 12-18 months, there will be at least a handful of companies wanting to avail the compute and data resources of the TeraGrid and will also join the network," they wrote. "If the roster of companies continues to grow, then there is a good chance that the TeraGrid could be privatized, the same way as the NSFNET was, to support commercial traffic. This could ultimately lead to the creation of the GridNet, which would operate alongside the Internet."
Applications Key To Grid's Success
The success of Grid computing will ultimately be determined not by "the technical sophistication of Grid computing protocols, nor by the elegance of Grid networks, but by what problems Grid computing will solve," the report said.
Not all applications will be able to take advantage of Grid computing, they said. "The grade of parallel efficiency exhibited by an application determines its suitability for Grid computing deployment," they wrote. "Parallelism is an inherent property of the application."
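The point about parallel efficiency can be made concrete with Amdahl's law, the standard formula bounding how much a partly-parallel application can speed up, no matter how many Grid nodes it runs on. The sketch below is illustrative and is not drawn from the report itself.

```python
def amdahl_speedup(parallel_fraction, n_nodes):
    """Upper bound on speedup (Amdahl's law) for a job whose
    parallel_fraction of work can be spread across n_nodes;
    the remaining serial fraction runs on a single node."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# Even a 95%-parallel application tops out far below linear scaling:
# 100 Grid nodes yield only about a 16.8x speedup, not 100x.
print(round(amdahl_speedup(0.95, 100), 1))
```

This is why the report treats parallelism as "an inherent property of the application": a workload dominated by serial steps gains little from any Grid, however large.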
"The amount of data being used by each concurrent task, assuming that the application exhibits parallelism, will determine whether a particular application is suited for being deployed on a desktop Grid, high performance Grid, or cluster Grid," the authors wrote.
In some cases, desktops may not have the amount of memory required for a particular task, or the network bandwidth requirements may interfere with normal business operations. "In each of these cases, applications can then be run on a high-performance Grid that is either in-house or available through a utility computing model," they said.
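The triage the authors describe — matching a parallel task's data and memory footprint to a desktop, cluster, or high-performance Grid — can be sketched as a simple decision rule. The tier names follow the report; the threshold values (in MB) are illustrative assumptions, not figures from the report.

```python
def choose_grid_tier(data_per_task_mb, task_memory_mb,
                     desktop_memory_mb=256, lan_friendly_mb=10):
    """Pick a Grid deployment tier for one concurrent task.
    Thresholds are hypothetical defaults for illustration only."""
    if task_memory_mb > desktop_memory_mb:
        # Desktops lack the RAM; use dedicated high-performance nodes.
        return "high-performance Grid"
    if data_per_task_mb > lan_friendly_mb:
        # Moving this much data per task would disrupt the office LAN.
        return "cluster Grid"
    # Small, independent work units can scavenge idle desktop cycles.
    return "desktop Grid"

print(choose_grid_tier(data_per_task_mb=2, task_memory_mb=64))
```

In practice these cutoffs would depend on the actual desktop fleet and network, which is the report's point: the application's per-task profile, not the Grid software, drives the deployment choice.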
The cost and ease of porting applications to a Grid environment will be one of the key factors determining the success of Grid deployments in enterprises, the report said. The leading Grid computing vendors, such as Platform Computing, Avaki, DataSynapse, Entropia and United Devices, offer various ways to port existing applications, they said. Entropia, for example, supports binary-level integration of applications in its recently released DCGrid 5.0 product, which allows any native 32-bit Windows application to be integrated without modifying its source code, they said. United Devices provides a software development kit (SDK) for its Metaprocessor 3.0 application.
Business Process Applications Discussed
The authors also discussed Grid-enabling business process applications, including transaction-oriented offerings such as Oracle applications, BEA WebLogic and IBM WebSphere, as well as ERP, CRM and sales automation systems.
"Apart from IBM's commitment to have Websphere natively Grid-enabled by the end of Q3 2002, there has not been much activity from other key vendors," the report said. "We have also not seen any partnerships between these ISVs and existing Grid companies. We believe, however, that many application vendors, especially those that have various Web service initiatives, will come to Grid computing by way of the Open Grid Services Architecture. OGSA essentially recasts the Grid computing architecture model in a Web services framework, one which these companies are familiar with as they adapt their existing software to Web services."
"In the meanwhile, companies such as Terraspring, Racemi and Ejasent have developed products that can bring the benefits of Grid computing to business process applications today," the report said.
Most of these applications run on separate, dedicated servers, they said, with the application and its servers typically sized to handle peak load. There may also be dedicated, redundant and backup systems for each application. During off-peak times, those resources sit idle or underutilized.
Ejasent, for example, has created a product suite that lets companies intelligently build and optimize their data center resources. A snapshot of each application is created and kept on a pool of shared machines, ready to be called into action when required. If the demand for a particular application cannot be handled by its primary server, traffic is redirected to a server in the pool, which invokes an instance of that application. The process usually takes less than five seconds and is imperceptible to users. The product also tracks the exact amount of time each application is used, which allows enterprises to pay software licensing fees based on minutes of application usage, the report said.
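The per-minute licensing model described above amounts to metering active time rather than provisioned capacity. A minimal sketch of that accounting follows; the class name, rate, and interface are hypothetical illustrations, not Ejasent's actual API.

```python
class UsageMeter:
    """Toy per-minute licensing meter in the spirit of the usage
    tracking described above. All names and rates are hypothetical."""

    def __init__(self, rate_per_minute):
        self.rate_per_minute = rate_per_minute
        self.total_seconds = 0.0

    def record(self, seconds_active):
        # Called each time an application instance is released back to
        # the shared pool, with the seconds it spent serving traffic.
        self.total_seconds += seconds_active

    def fee(self):
        # Bill only for minutes actually used, not for idle capacity.
        return (self.total_seconds / 60.0) * self.rate_per_minute

meter = UsageMeter(rate_per_minute=0.10)
meter.record(90)   # one 90-second burst of activity on a pool server
print(f"${meter.fee():.2f}")
```

The appeal for enterprises is that a server sitting idle in the pool accrues no licensing cost, which is exactly the underutilization problem the preceding paragraph describes.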
Ejasent's product suite can be deployed by an application service provider and offered as a service to customers. It can also be deployed by enterprises themselves to create a utility Grid for internal customers, which allows intelligent sizing and utilization of systems while maintaining fast response time, the report said.
The 150-page report costs $2,995, and is available from internet.com's AllNetResearch service.