In-memory data grids are viewed as one way to handle the velocity, variety and volume of big data. By moving data into memory and distributing it across multiple servers, the approach aims for easier access to data, improved scalability and better data analysis. Early adopters include Internet giants Google, Facebook and Twitter -- but now experts say the technology is starting to go mainstream. Stamford, Conn.-based Gartner Inc. calls in-memory computing one of the top 10 strategic technology trends of 2013.
The rise of in-memory data grids (IMDGs) -- or in-memory distributed caches -- is part of a larger trend in big data technology. Gartner reports that markets aligned to data integration and data quality tools are on an upward swing, with related IT spending set to reach $38 billion by 2014.
"What's really driving [adoption of in-memory data grids] is the fact that workloads on Web-based applications are growing very quickly," said Dave Brinker, chief operating officer of Bellevue, Wash.-based ScaleOut Software, a scalable data grid software company. "Traffic gets so heavy that bottlenecks occur in the processing of Web applications."
IMDGs can help by caching data in memory rather than fetching it from a disk-based database on every request. This enables faster response times and avoids data "traffic jams."
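The caching idea described above is often implemented as the cache-aside pattern: check the in-memory store first and fall back to the database only on a miss. The sketch below is illustrative, not taken from any particular IMDG product; `SlowDatabase` and the class names are invented stand-ins.

```python
import time

class SlowDatabase:
    """Illustrative stand-in for a slow, disk-based database."""
    def __init__(self, rows):
        self.rows = rows

    def query(self, key):
        time.sleep(0.01)  # simulate disk/network latency
        return self.rows[key]

class CacheAside:
    """Cache-aside: serve from memory when possible, else load
    from the database and populate the cache for later requests."""
    def __init__(self, db):
        self.db = db
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.db.query(key)
        self.cache[key] = value
        return value

store = CacheAside(SlowDatabase({"user:42": "Alice"}))
store.get("user:42")  # first read misses and hits the database
store.get("user:42")  # repeat read is served from memory
```

In a real data grid the `self.cache` dictionary would be partitioned and replicated across servers, but the read path follows the same shape.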
The main use case for IMDGs is to support Internet-scale Web and mobile applications, said Mike Gualtieri, principal analyst at Cambridge, Mass.-based Forrester Research Inc. "Many large enterprises have been using [in-memory distributed caching] for years -- such as Bank of America," he said. "When users connect to a site or service, they increasingly expect blazing fast performance."
Supporting high performance -- along with high scalability -- is a specialty of IMDGs, according to Massimo Pezzini, vice president and fellow at Gartner.
"For high performance, having the data in memory allows for accessing data much faster than is possible with a regular database," he said. "If you have a scalability issue … by distributing your data across a large number of servers, you can deploy an application to support tens of thousands, hundreds of thousands, or even millions of users."
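The distribution Pezzini describes is typically achieved by hashing each key to an owning node, so both the data and the request load spread across the cluster. A minimal sketch, assuming a static list of nodes (the node names are invented; production IMDGs add replication and rebalancing on top of this):

```python
import hashlib

def owner_node(key, nodes):
    """Map a cache key deterministically to one grid node by hashing."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

nodes = ["grid-1", "grid-2", "grid-3", "grid-4"]

# Every key lands on exactly one node, so adding nodes spreads
# both the stored data and the read/write traffic.
placement = {k: owner_node(k, nodes) for k in ("user:1", "user:2", "cart:7")}
```

Because the mapping is deterministic, any application server can compute which node holds a key without a central lookup, which is what lets the grid scale to very large user counts.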
The ability to improve performance and scalability has made IMDGs especially popular with e-commerce, Software as a Service (SaaS) and financial services organizations. Successful SaaS platforms serve many thousands of users and therefore require a highly scalable architecture, explained Pezzini. E-commerce providers, on the other hand, look to IMDGs for their performance perks: The faster an e-commerce website runs, the more revenue it generates. And a growing number of financial services firms use IMDGs for fraud detection, analyzing reams of historical data in real time to track the behavior of credit card users.
Still, Pezzini stops short of calling IMDGs a mainstream technology -- for now.
"As a standalone product, [the IMDG] will take another three or four years to become mainstream -- but as an embedded component, it's already happening," he said. "Increasingly, software vendors are embedding in-memory data grid technologies as enablers for their packaged applications and other forms of software, like ESB technologies, BPM tools and application servers."
Some of today's largest middleware vendors offer data-grid enabled software. By way of example, Oracle Coherence -- a Java-based in-memory data grid -- is embedded in both Oracle SOA Suite and Oracle BPM Suite. Likewise, IBM's WebSphere eXtreme Scale provides distributed caching for use in environments such as WebSphere Commerce Server. Similar offerings are available from Software AG (Terracotta's BigMemory) and Tibco Software (ActiveSpaces).
The IMDG market may be relatively small -- taking in just $260 million in 2011 -- but Pezzini noted that some data grid vendors doubled their revenues in 2012, while others quadrupled them. He predicts the market will hit the billion-dollar mark by 2016 or 2017.
Prime drivers, challenges of IMDGs
As in-memory data grids gain traction, their challenges are coming into focus. In particular, using IMDGs calls for an understanding of distributed computing techniques -- an area where many application developers fall short.
"In-memory data grids are sophisticated software that requires advanced skills," said Pezzini. "These skills are hard to find and expensive." The problem is exacerbated by a lack of standards, he continued. Each product on the market requires a different skill set, so IMDG expertise cannot easily be carried over from one vendor's product to another.
The learning curve doesn't stop there.
"One of the pitfalls we run into is that people don't conceptually grasp that an in-memory data grid is spanning a set of servers," said William Bain, CEO of ScaleOut Software. "Most IT people view it as running on a single server."
That misconception can lead to provisioning and deployment mistakes, Bain said. He suggested verifying that there is enough network bandwidth between servers before implementing an IMDG.
At the same time, a solid monitoring and management strategy is critical to success with in-memory technology. A large data grid might span several hundred servers -- a significant management challenge.
"Firms must add another tier to their architecture," explained Forrester's Gualtieri. "This tier is between the database and the application server. Another layer means more technology to manage and new design considerations for application development professionals."
Despite these obstacles, Brinker predicts that several factors will continue to push IMDGs into the mainstream.
"Memory prices have come down so much that now it's practical to have very large memory systems in place," he said. "Networks have also become more reliable, so you can rely on in-memory computing much more. And the business factor driving all of this is competitive pressure for businesses to make faster and faster decisions."
This was first published in March 2013