The explosion of interest in application componentization, component reuse and orchestration of workflows based on business process needs has stimulated broad adoption of service-oriented architecture (SOA).
Anyone who uses SOA understands that inter-component information flows can create performance issues if components are divided across data centers in a cloud implementation. Some users have already been forced to consider the system-connection implications of workflows even inside their data centers. Any combination of SOA and the cloud demands that the architects involved review the impact of cloud connectivity and cloud server performance on the workflow and the application overall.
Server performance issues are readily addressed because SOA components have always run on systems with widely varying levels of performance. However, any form of virtual or shared resource used for SOA component hosting introduces inherent performance variability. Pay particular attention to any cloud adoption in which one or more SOA components move from a dedicated resource into the cloud. Plan for the lowest level of performance exhibited in the pilot operation and you're safe.
Optimizing the network side of a SOA-cloud combination
The first step in optimizing SOA-cloud performance is to map the application's workflows against a block diagram of possible cloud hosting points. Anywhere a workflow passes across a cloud boundary there is potential for performance issues and, in most cases, higher cloud costs as well. These problems can be mitigated locally, through accommodation, or by more systemic changes to the architecture of the application itself.
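The mapping step can be sketched programmatically. The following is a minimal, illustrative sketch (all component names, zone names and the `boundary_crossings` helper are hypothetical): given each component's hosting point and the ordered path a transaction takes, it flags the hops where the workflow crosses a cloud boundary.

```python
# Illustrative sketch: flag workflow hops that cross a hosting boundary,
# which is where performance and cost risks concentrate.

def boundary_crossings(hosting, workflow):
    """hosting maps component -> hosting zone (e.g. 'datacenter', 'cloud-a').
    workflow is the ordered list of components a transaction visits.
    Returns the (source, destination) hops that cross zones."""
    crossings = []
    for src, dst in zip(workflow, workflow[1:]):
        if hosting[src] != hosting[dst]:
            crossings.append((src, dst))
    return crossings

hosting = {"ui": "cloud-a", "pricing": "cloud-a",
           "inventory": "datacenter", "billing": "datacenter"}
workflow = ["ui", "pricing", "inventory", "billing", "ui"]

# Two hops cross a boundary here: pricing -> inventory and billing -> ui.
risky_hops = boundary_crossings(hosting, workflow)
```

Each flagged hop is a candidate for either more connection capacity or a component-placement change, as discussed next.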
Mitigating workflow-related performance problems by accommodation, that is, without re-architecting the application, means either increasing connection performance or altering the placement of application components in the cloud. Most architects should consider both options, weighing the additional cost of connection capacity against the loss of flexibility that comes from constraining where software can run in a cloud application.
Users report that where accommodation is possible, it's likely the most effective solution to cloud performance optimization for SOA applications because it reduces development, testing risks and costs. Where it's not possible, a more architected solution is indicated.
Using cloud bursting
The state-of-the-art solution to application performance issues in a hybrid cloud is cloud bursting between the data center and the cloud when response times exceed design goals. In a pure cloud application, this process is called horizontal scaling because components of the application that can become bottlenecks are duplicated to enhance performance. The challenge with this approach for SOA applications is that most SOA software components are stateful, meaning an entire transaction has to be supported with the same set of components to avoid loss of information.
It's possible to design SOA workflows to accommodate multiple copies of components. This involves the introduction of a work-scheduling function in the application's flow. The function commits a set of subordinate components to a transaction and releases them when the exchange the transaction represents is complete.
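The work-scheduling function described above might look like the following minimal sketch. All names here (`WorkScheduler`, the pool and transaction identifiers) are illustrative, not taken from any specific product: the scheduler commits one instance of each stateful component to a transaction and releases the whole set when the exchange completes, so every step of the transaction sees the same component copies.

```python
# Hedged sketch of the work-scheduling idea: pin a set of stateful
# component instances to a transaction for its full lifetime.

class WorkScheduler:
    def __init__(self, pools):
        # pools: component name -> list of available instance ids
        self.pools = {name: list(ids) for name, ids in pools.items()}
        self.active = {}  # transaction id -> committed instances

    def commit(self, txn_id, components):
        """Reserve one instance of each named component for this transaction."""
        assigned = {}
        for name in components:
            if not self.pools[name]:
                # Roll back any partial assignment if a pool is exhausted.
                for n, inst in assigned.items():
                    self.pools[n].append(inst)
                raise RuntimeError(f"no free instance of {name}")
            assigned[name] = self.pools[name].pop()
        self.active[txn_id] = assigned
        return assigned

    def release(self, txn_id):
        """Return the transaction's instances to their pools when it completes."""
        for name, inst in self.active.pop(txn_id).items():
            self.pools[name].append(inst)

sched = WorkScheduler({"order": ["order-1", "order-2"], "credit": ["credit-1"]})
pinned = sched.commit("txn-42", ["order", "credit"])
# ... run every step of txn-42 against the same pinned instances ...
sched.release("txn-42")
```

The rollback in `commit` matters: without it, a transaction that fails to acquire its full component set would strand instances and shrink the pools over time.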
This form of internal scheduling is easiest to apply when the components involved don't need to share a resource such as a database. Where a shared resource is required, the bottleneck may be the resource itself, and replicating it may be impossible.
If it's not possible to re-architect the SOA portion of an application for the cloud, consider using a cloud layer on top of the enterprise portion of the application. The goal is to create what's effectively a user front end that will support the portions of the user-to-application dialog that are not dependent on single-threaded resources. This cloud layer can be written using RESTful principles that facilitate load-balancing among components and horizontal scaling under load.
Using RESTful principles
Architects who have used this approach report that the key to its success is restructuring the SOA application's workflow to front-load it with activities that can be supported from modest, fairly static repositories rather than requiring tight integration with a transaction-driven database. Any time a database has to be updated so that successive activities in the application see the results of prior ones, the process creates a natural performance bottleneck that's hard to relieve. Structuring the application so that this kind of database update is the final step in a process, rather than an ongoing element, can pay dividends by allowing the early steps of an application to be moved to the cloud and scaled elastically with load.
The classic example is a Web/cloud front end to a retail fulfillment application -- the Web store. In this application, the fairly static product catalog can be supported in the cloud at a reasonable cost. All user activity in browsing for products can be supported from this catalog, and in some cases it's possible to support a shopping cart as well. Only when the user decides to commit is the transaction passed to the existing SOA retail application, which, because it now handles only committed transactions, sees far less load and has far less impact on performance.
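A minimal sketch of this Web-store pattern follows. Everything here is illustrative (the catalog contents, the `CartSession` class, and the `fulfillment_stub` stand-in for the enterprise SOA application's entry point): browsing and the cart run entirely against a static, cloud-replicable catalog, and the single handoff to the back end is the final checkout step.

```python
# Illustrative web-store sketch: cloud-side browsing and cart, with the
# transactional back end touched only once, at checkout.

CATALOG = {"sku-1": ("Widget", 19.99), "sku-2": ("Gadget", 5.49)}  # static copy

class CartSession:
    """Cloud-side session: no back-end call until checkout."""
    def __init__(self):
        self.items = []

    def browse(self, sku):
        # Served entirely from the replicated catalog; scales horizontally.
        return CATALOG.get(sku)

    def add(self, sku):
        self.items.append(sku)

    def checkout(self, fulfillment):
        # The single handoff to the existing SOA retail application,
        # modeled here as any callable that accepts the assembled order.
        order = {"items": list(self.items),
                 "total": round(sum(CATALOG[s][1] for s in self.items), 2)}
        return fulfillment(order)

def fulfillment_stub(order):
    # Stand-in for the enterprise application's entry point.
    return {"status": "accepted", **order}

cart = CartSession()
cart.add("sku-1")
cart.add("sku-2")
result = cart.checkout(fulfillment_stub)
```

Because `CartSession` holds no back-end state until checkout, any number of copies can run behind a load balancer, which is exactly the elastic-scaling property the front-loaded design is after.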
SOA principles matured before the cloud, but it's possible to make SOA applications cloud-friendly. Possible doesn't mean automatic, though. Considering each of the steps outlined here will help ensure that your cloud plans don't collide with performance problems.
About the author:
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.
This was first published in January 2014