Selecting a SOA testing tool

Choosing a SOA testing tool can be tricky and, if done incorrectly, can diminish ROI. Learn what to consider when selecting a SOA testing tool.

Traditionally, software testing followed a black-box approach, which works well for monolithic systems accessed through a graphical user interface. As IT systems grew more complex, modular and interconnected, newer testing methodologies began to evolve. Service-oriented architecture and similar component-based architectural styles make IT systems more modular, standardized and reusable, with the objective of better business-IT alignment. This has forced testing experts to make a paradigm shift from an application-centric testing approach to a more business-process-centric one. The resulting gray-box testing methodologies, such as SOA testing, focus on validating granular business components and their relationships rather than on black-box application testing.

The term SOA testing is often frowned upon by idealists because SOA is only an architectural style, so adding the word testing to it does not yield a precise meaning. Nevertheless, the term is now used for all kinds of application interface testing without a graphical user interface (GUI), mainly aimed at testing "headless" components such as Web services, enterprise service buses (ESBs) and process models. This type of testing always requires a tool to simulate the consumers of these interfaces or, sometimes, to "stub out" the providers that these interfaces consume. This is why we see a wide range of tools in the market tagged as "SOA testing tools."

The term SOA testing is very broad, and it is sometimes difficult for a consultant to recommend a tool that best fits an organization's testing needs without a complete analysis of its architectural and testing landscape. Moreover, license costs for these tools vary widely: some tools are free, a few charge nominal amounts and the rest are expensive. This makes tool selection even more complex, as you could take a drastic hit on your ROI by selecting the wrong tool, either because it is under-fit or over-fit for your needs. A structured analysis of your testing needs when selecting a SOA testing tool helps achieve better ROI in the long run. Hence, it is important to understand the parameters discussed below when selecting a SOA testing tool.

Is another tool for SOA testing really required?

Determining whether you need a SOA testing tool is important. Look at the size of the implementation and analyze whether SOA testing will add any benefit or simply be overhead. It is futile to plan additional testing for a few application-specific Web services that are created only shortly before the application UI.

On the other hand, enterprise services affecting multiple consumers are good candidates for standalone testing. Similarly, it's important to test an ESB that connects several systems because it is a critical failure point. Early testing, faster testing and improved coverage are three main areas of concern in large SOA implementations.

Once the need is established, look around to see what you already have. Testers and developers have always been innovative about creating utilities to aid testing. Perhaps some smart people in the organization have already developed some interface testing tools. Those tools need to be collected and analyzed, along with the other tools, against several parameters.

Which transport protocols must the tool support?

Transport protocols establish the connection between the consumer and the service. Because the tool simulates the consumer in SOA testing, it must support the required transport protocols. Support for common protocols such as HTTP/HTTPS, JMS, FTP and JDBC is necessary to perform SOA testing efficiently.

Depending on the middleware or enterprise service bus product implemented, support for the protocols provided by the middleware also should be evaluated. For example, an IBM WebSphere Message Broker ESB needs support for IBM WebSphere MQ.
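
Because the tool plays the consumer's role, the simplest capability to verify is whether it can open a connection over your transport and push a message through it. As a point of reference, the sketch below does this by hand over HTTP using only the Python standard library; the endpoint URL and payload are hypothetical.

```python
# Minimal sketch: simulating a service consumer over HTTP.
# The endpoint URL and request body are hypothetical.
import urllib.request

ENDPOINT = "http://localhost:8080/orders/service"  # hypothetical test endpoint

request_body = b"<order><id>1001</id><qty>2</qty></order>"
req = urllib.request.Request(
    ENDPOINT,
    data=request_body,
    headers={"Content-Type": "text/xml; charset=utf-8"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status, resp.read().decode("utf-8"))
```

A tool earns its keep by doing the same over JMS, FTP, JDBC or a proprietary middleware transport without this kind of hand-written plumbing.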

How well can the tool handle message protocols between systems?

Messages between the service provider and consumer are exchanged in predefined formats. The testing tool should be able to read and interpret these messages. In developer terms, the tool must "parse" them and help the tester read from or pass values into individual data elements in the message. Dynamic creation of request messages for testing different scenarios should be supported through a data-driven framework where data is supplied from an external source like a comma-separated values file or database.
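
To make the data-driven idea concrete, here is a rough sketch of generating one request per row of an external data file. The file name, columns and XML template are hypothetical; a real tool would hide this behind its data-driven framework.

```python
# Sketch of data-driven request generation: each CSV row is substituted
# into an XML request template to produce one test message per scenario.
import csv
from string import Template

REQUEST_TEMPLATE = Template(
    "<getQuote><symbol>$symbol</symbol><currency>$currency</currency></getQuote>"
)

with open("test_data.csv", newline="") as f:    # hypothetical data file
    for row in csv.DictReader(f):               # columns: symbol, currency
        request_xml = REQUEST_TEMPLATE.substitute(row)
        print(request_xml)                       # in a real test, send over the chosen transport
```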

XML-based implementations are commonly adopted for SOA deployments, but there may be other message formats like JSON or flat strings with delimiters. So, the tool should ideally support the common message formats that exist or are likely to be adopted. Support for the common architectural standards followed -- such as SOAP or REST -- is a must. In the case of SOAP-based Web services, the tool should be able to parse Web Services Description Language (WSDL) files and produce request and response structures for automated simulation and validation.
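
As a rough illustration of WSDL parsing, the sketch below lists the operations declared in a WSDL 1.1 document so that request skeletons could be generated for each one. The file name is hypothetical and namespace handling is simplified; commercial tools go further and build full request and response structures from the schema.

```python
# Sketch: extract operation names from a WSDL 1.1 document.
import xml.etree.ElementTree as ET

WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

tree = ET.parse("OrderService.wsdl")             # hypothetical WSDL file
for port_type in tree.getroot().findall(f"{WSDL_NS}portType"):
    for op in port_type.findall(f"{WSDL_NS}operation"):
        print("operation:", op.get("name"))
```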

Some tools may be architecturally rigid and may not offer message-format support as a standalone feature. The support may be available only when the message format is used with a specific transport. In other words, the tool binds message-format support tightly to the transport, and the format can be used only in that particular combination. For example, a tool may support a data-driven approach for XML request generation only when we supply a WSDL or REST service endpoint, but not when the XML has to be sent over JMS.

If there is an industry-specific standard for messages, such as Electronic Data Interchange, the tool should be evaluated for those too. If they are not supported, you need to look at the extensibility options provided by the tool.

What level of test automation support do you expect from the tool?

Automated request generation, response validation and database validation are typical expectations of a SOA testing tool. Automated request generation is supported through a data-driven framework, while automated response and database validation are achieved through the predefined or custom assertion and checkpoint features the tool provides, together with its transport protocol support such as HTTP, JMS or JDBC.
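
By way of illustration, the assertions below validate a field in an XML response and then cross-check the same value against a database. The response, table and query are hypothetical, and sqlite3 stands in for whatever JDBC-style access a real tool would provide.

```python
# Sketch of response validation plus database validation.
import sqlite3
import xml.etree.ElementTree as ET

response_xml = "<orderResponse><status>CONFIRMED</status><orderId>1001</orderId></orderResponse>"
root = ET.fromstring(response_xml)

# Response assertion: the status element must hold the expected value.
assert root.findtext("status") == "CONFIRMED"

# Database assertion: the same order must have been persisted with that status.
conn = sqlite3.connect("orders.db")              # hypothetical database
row = conn.execute(
    "SELECT status FROM orders WHERE order_id = ?", (root.findtext("orderId"),)
).fetchone()
assert row is not None and row[0] == "CONFIRMED"
```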

While standalone service testing is the first step of SOA testing, the next is testing an integrated cluster of services, or the orchestration implemented in an ESB or an aggregation service. Automating such business-process testing requires a tool that can model each service call as an individual test step, with the result of each step serving as input to subsequent steps, either by transferring values from one step to the next or through conditional execution based on the previous step's result.
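
The sketch below chains two hypothetical service calls: an order id extracted from the first response feeds the second request, and the second call runs only if the first step produced a result. Endpoints and payloads are invented for illustration.

```python
# Sketch of a two-step business-process test with value transfer and
# conditional execution between steps.
import urllib.request
import xml.etree.ElementTree as ET

def call(url, body):
    req = urllib.request.Request(url, data=body.encode(), method="POST",
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return ET.fromstring(resp.read())

# Step 1: create an order and capture its id from the response.
order = call("http://localhost:8080/orders", "<createOrder><qty>2</qty></createOrder>")
order_id = order.findtext("orderId")

# Step 2: conditional execution -- invoke the shipping service only if step 1 returned an id.
if order_id:
    shipment = call("http://localhost:8080/shipments",
                    f"<ship><orderId>{order_id}</orderId></ship>")
    print(shipment.findtext("status"))
```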

For nonfunctional testing such as performance and security testing, specific parameters need to be applied. For example, a good Web service performance testing tool should help model the workload mix, capture the response time and throughput of the Web service, and report system resource usage while the service is under load.
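
To show what "response time and throughput" boil down to, here is a very small load-test sketch that fires concurrent requests at a hypothetical endpoint and reports both figures. A real tool would also model the workload mix, ramp up users gradually and capture server-side resource metrics.

```python
# Tiny load-test sketch: N concurrent requests, then average latency and throughput.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://localhost:8080/quote?symbol=ABC"   # hypothetical
REQUESTS = 50

def timed_call(_):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed_call, range(REQUESTS)))
elapsed = time.perf_counter() - start

print(f"avg response time: {sum(latencies) / len(latencies):.3f} s")
print(f"throughput: {REQUESTS / elapsed:.1f} requests/s")
```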

A good security testing tool should be capable of validating service compliance with security standards set by W3C (SSL, XML Encryption and XML Signature), OASIS (XACML, WS-Security and WS-Policy) and WS-I (Basic Security Profile). The tool should also conduct penetration testing by simulating attacks through alteration of message structure and content.
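
As a simple illustration of that last point, the sketch below sends deliberately hostile or malformed variants of a request and checks that the service fails safely rather than accepting them or leaking implementation details. The endpoint and payloads are hypothetical; dedicated tools ship with far richer attack libraries.

```python
# Sketch of content-level penetration probes via message alteration.
import urllib.request
import urllib.error

ENDPOINT = "http://localhost:8080/orders/service"      # hypothetical

attacks = [
    "<order><id>1001' OR '1'='1</id></order>",          # injection-style content
    "<order><id>" + "A" * 100_000 + "</id></order>",    # oversized field
    "<order><id>1001</id>",                              # broken structure
]

for payload in attacks:
    req = urllib.request.Request(ENDPOINT, data=payload.encode(), method="POST",
                                 headers={"Content-Type": "text/xml"})
    try:
        urllib.request.urlopen(req, timeout=10)
        print("WARNING: service accepted a hostile payload")
    except urllib.error.HTTPError as err:
        # Rejection is expected; the error body should not leak internals.
        leaked = "Exception" in err.read().decode(errors="replace")
        print("rejected with", err.code, "(leaks internals)" if leaked else "(clean error)")
```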

In addition, the tool should be able to validate implementation against governance standards such as those specified by W3C and WS-I. There can also be support for industry standards, including ACORD for Insurance and STAR BOD for Automotive.

Service virtualization: Something worth trying?

Service virtualization is an emerging concept for dealing with service interdependencies. A developer might call this "stubbing," but the term service virtualization means more than that when it comes to evaluating a tool.

Mock services

The basic expectation of the tool is that it can build a mock version of a service that is yet to be developed. A mock service helps in the following situations (a minimal sketch follows the list):

  1. Service to be tested – The automation scripts you create need to be validated before execution. A mock service helps in this process.
  2. Dependent service – The service under test may be intended to consume another service that is yet to be developed. A mock service helps stub out that dependency.
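
The sketch below stands up a throwaway HTTP endpoint that returns a canned response, which is essentially what a tool-generated mock does. The port and response body are hypothetical.

```python
# Minimal mock service: a throwaway HTTP endpoint with a canned response.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = b"<orderResponse><status>CONFIRMED</status></orderResponse>"

class MockService(BaseHTTPRequestHandler):
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # drain the request
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

HTTPServer(("localhost", 8081), MockService).serve_forever()
```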

Virtualized services

Often, there are real services that are access-constrained and can hold up the testing of services that depend on them. These could be third-party services or databases, or a service available on a pay-per-use model. It is essential to virtualize such services to avoid dependencies on the real ones and delays in testing. The tool should be able to record the traffic to these services and create intelligent virtual services. This feature usually does not come with the standard testing tool but is packaged as a separate product by most SOA testing tool vendors.
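
A rough record-and-replay sketch of the idea: the first time a request body is seen, it is forwarded to the real, constrained service and the response is recorded; identical requests are then answered from the recording. URLs and ports are hypothetical, and real virtualization products add request-matching rules, data masking and stateful behavior on top of this.

```python
# Sketch of a record-and-replay virtual service in front of a constrained endpoint.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REAL_SERVICE = "http://thirdparty.example.com/rates"   # hypothetical pay-per-use service
recordings = {}                                         # request body -> recorded response

class VirtualService(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        if body not in recordings:                      # record on first sight
            req = urllib.request.Request(REAL_SERVICE, data=body, method="POST")
            with urllib.request.urlopen(req, timeout=10) as resp:
                recordings[body] = resp.read()
        self.send_response(200)                         # replay thereafter
        self.end_headers()
        self.wfile.write(recordings[body])

HTTPServer(("localhost", 8082), VirtualService).serve_forever()
```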

Customization of tool features

It's rather tough to find a tool directly off the shelf that meets all requirements for testing and test automation. In many cases, the tool's extensibility becomes the factor that closes the gap between available features and required features. There are two ways of extending a tool's capabilities: developing reusable components that can be deployed into the tool's interface, or writing custom scripts each time you come across a specific scenario. The first approach usually proves more beneficial than the second.

If you are looking at customizations, make sure the technologies approved by your organization align with what the tool recommends. For example, some tools use Java-based frameworks, while others may use the .NET framework with languages such as C#. What suits your organization's technology standard is an important consideration.

Once you have evaluated and rated the technical features of the tool, shift your focus to two commercial issues: the tool cost and the pricing model. Many tools offer feature-based licensing, such as separate licenses for functional testing, performance testing and service virtualization, or independent licenses for each transport or message format supported.

For example, consider two tools -- one supports functional and performance testing in the base model, whereas the second supports only functional testing in the base model, with performance testing as a separate license. If you do not need performance testing with the SOA testing tool (because you already have an established performance test framework), it may not be wise to invest in the first tool.

In another instance, a tool supports creation of mock services from WSDL in its base version and real service simulation as a highly priced separate license. Another tool has both mock service and virtualized service packaged together as an add-on license at the same price. Buying the second tool just to create mock services would be an unnecessary expenditure.

Hence, it's important to look at what you actually need when you decide to buy a tool. Buying features that will not be used takes a drastic toll on ROI. At the same time, not buying the features you do need can have the same effect.

SOA testing involves dealing with many heterogeneous technology elements used to implement the architecture. A clear understanding of your current and future architecture is mandatory before you start the tool selection process.

Consider these important technical aspects: support for different message formats, transport protocols, test automation, type of testing, service virtualization and extensibility features. As part of the process, consider the existing tool set and how well you can use those tools. Commercial considerations and flexible tool licensing are unavoidable parts of the process. Give these parameters their due diligence and ROI can reach its peak. Happy tool selection!

About the Author
Aravind Parameswaran has over 12 years of experience in software development and testing. He is a senior project manager with Infosys Limited, and is currently the service lead for SOA testing for the independent validation solutions unit in retail, consumer packaged goods, logistics and life sciences verticals.

This was first published in September 2012
