Utility computing is getting a lot of attention for its potential to help companies cut IT expenditures while making
their networks more flexible and scalable.
Then again, IT professionals have heard such claims before.
The idea behind utility computing is this: users buy processing power and related services on an as-needed basis, much as they buy electricity. For example, when network traffic gets too heavy in a company's data center, an intelligent architecture could automatically bring other resources into action, including idle servers, applications or pools of network storage. The company would pay only for the time it uses those services.
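The pay-for-what-you-use model described above can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's product: the class name, the per-server-hour rate and the demand figures are all invented for the example.

```python
# Minimal sketch of the utility-computing billing idea: extra capacity
# is leased only while demand exceeds owned capacity, and the customer
# is billed only for the hours that extra capacity is active.

class UtilityPool:
    def __init__(self, base_servers, rate_per_server_hour=1.0):
        self.base = base_servers     # capacity the company owns outright
        self.extra = 0               # on-demand servers currently leased
        self.billed_hours = 0.0
        self.rate = rate_per_server_hour

    def tick(self, demand, hours=1.0):
        """Each interval, lease just enough extra servers to cover demand."""
        needed = max(0, demand - self.base)
        self.extra = needed
        self.billed_hours += needed * hours
        return self.extra

    def bill(self):
        return self.billed_hours * self.rate

pool = UtilityPool(base_servers=10, rate_per_server_hour=2.0)
pool.tick(demand=8)    # off-peak: owned capacity suffices, no charge
pool.tick(demand=14)   # spike: 4 extra servers leased for one hour
pool.tick(demand=9)    # back to normal: leased servers released
print(pool.bill())     # only the one spike hour is billed: 8.0
```

The contrast with the traditional model is that, without the pool, the company would have to own 14 servers around the clock to survive the single spike hour.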
During the late 1990s, service providers for discrete technologies cropped up making similar claims. The landscape is littered with failed storage service providers and application service providers -- two types of companies that rented out services on demand.
Buying computing power like you buy electricity sounds as fanciful as the land of Oz. Utility computing principles have had limited adoption to date, although more businesses are recognizing the potential benefits, says Bill Martorelli, an analyst with Giga Information Group in Cambridge, Mass. Companies offering Web services are especially interested, he says.
"Let's say you have a company that has to maintain a Web infrastructure," Martorelli said. "You have certain peaks or unpredictable demand on that infrastructure throughout the day. Wouldn't it be nice to have the ability to access extra capacity when you needed it, instead of buying the infrastructure (outright) and have it sitting there all the time doing nothing?"
Accessing resources in such a dynamic fashion means companies could reduce huge up-front IT expenditures and "be able to buy services on a variable-price basis," he said.
The traditional IT architecture leads to "over-provisioning" and unused resources, says Galen Schreck, an analyst with Forrester Research in Cambridge, Mass. Enterprises have had to engineer their computing environments to handle spikes in demand, even if they occur only occasionally. "It's pretty much an application per server in the data center. Companies don't usually run applications concurrently because of fears they will crash," Schreck said.
A company utilizing 20% of its server capacity probably would be ecstatic about how its systems are being used, he said. In reality, however, that would mean 80% of its resources aren't being utilized -- although the servers have been paid for, installed and maintained at considerable cost. Being able to automate the allocation of dormant computing resources could raise utilization rates across the enterprise while lowering costs, Schreck said.
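The utilization figures Schreck cites translate directly into wasted spend. The back-of-the-envelope arithmetic below uses an invented server count and cost figure purely for illustration:

```python
# Illustrating the "ecstatic" 20% utilization case quoted above.
# Server count and annual cost are assumptions for the example.

servers = 100
annual_cost_per_server = 5_000          # purchase + maintenance, assumed
utilization = 0.20                      # the 20% case

total_cost = servers * annual_cost_per_server
idle_fraction = 1 - utilization
wasted_spend = total_cost * idle_fraction

print(f"{idle_fraction:.0%} of capacity is idle")      # 80% of capacity is idle
print(f"${wasted_spend:,.0f} spent on idle capacity")  # $400,000 spent on idle capacity
```

At the 5% utilization some studies report, the same arithmetic would put 95% of that spend, $475,000 in this example, against resources doing no work.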
Some studies have placed resource utilization in enterprises as low as 5%. To that end, a host of software companies are scrambling for market share. They include VMware Inc. of Palo Alto, Calif.; Egenera Inc. of Marlborough, Mass.; Opsware Inc. of Sunnyvale, Calif.; and Think Dynamics Inc. of Toronto. Two other companies, Terraspring Inc. of Fremont, Calif., and Connectix Corp. of San Mateo, Calif., were recently purchased by Sun Microsystems Inc. and Microsoft Corp., respectively.
The term utility computing sometimes is used interchangeably with grid computing, so it's not unusual for the buying of computing power to be compared to the buying of electricity or telecommunications bandwidth, which also travel over grids. Although grid computing is in use at some organizations, Schreck notes that its adoption is limited mainly to the scientific and academic communities. Grid computing, he says, connects a large number of disparate processors across geographic locations, whereas true utility computing can be done within an individual enterprise.
Hewlett-Packard Co. last year began offering its HP Utility Data Center product, in an effort to accelerate commercial use of grid computing. HP isn't alone in staking out a piece of the utility computing/grid market. IBM Corp.'s eLiza project, for example, focuses on building autonomic computers that can recover from disruptions without human intervention. And Sun continues to tout its N1 portfolio, a heterogeneous architecture that provisions computing, storage and network resources based on the demand for services.
Utility computing also could play a role in server consolidations, according to a study by Stamford, Conn.-based Gartner Inc. The firm predicts that, by 2006, hardware partitioning on Unix and Windows platforms will yield to a combination of logical and software partitioning, which will become the "dominant partitioning for server consolidation capability."
To date, utility computing principles are being applied mostly by larger companies, such as big financial services institutions. American Express Co. last year inked a $4 billion, seven-year deal for on-demand IT services with IBM Global Services. IBM also is providing a range of Web-hosting services for Dow Chemical Co.'s e-business sites.
Since the technology behind utility computing is still emerging, Martorelli said, it can be difficult for small and midsized companies to assess return on investment. But they should be paying attention to the technology's development. "Certainly, there is a very high potential for business value in these kinds of products and services but, like any other technology, people have to evaluate it based on the benefits."