In application load testing, it can be very difficult to predict the volume of traffic a new Web service
will be subjected to. Even if initial predictions are accurate, use of a Web service often grows as consumers outside the enterprise connect to it. To reduce performance risk, it is important to measure the performance of Web services separately from related applications and to document the results, said Sagi Varghese, QA manager at JetBlue.
"Once you publish a Web service out into the community, you don't exactly know what the usage is going to be," said Varghese. "So what you want to do is independently measure the Web service and publish a handbook that says 'this is what we have measured the Web service to do; this is the performance.'"
With most Web applications, there is a fixed demand in terms of how many concurrent users you can expect, said Varghese. And when a test team knows what kind of traffic to expect, load testing provides more true-to-life results. Web services, however, can be hit with traffic from users as well as from applications and other services. This makes it important to keep performance metrics up to date.
Specifically, testers at JetBlue look at the number of connections and compare it to response time. If response time for a Web service exceeds one second, that could signal a problem, said Varghese. His team determines how many concurrent connections a service can handle before performance starts to suffer. Average response time tends to be less than 0.5 seconds, he said.
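The article does not describe JetBlue's tooling, but the approach it outlines, ramping up concurrent connections until average response time crosses the one-second mark, can be sketched as a small load driver. The `call_service` stub and its linear latency model below are illustrative assumptions, not JetBlue's actual service or numbers:

```python
import threading
import time

THRESHOLD = 1.0  # seconds -- the rule-of-thumb limit cited by Varghese

def call_service(concurrency):
    # Stand-in for a real Web service call. For illustration, latency
    # is modeled as growing linearly with concurrent load.
    time.sleep(0.02 * concurrency)

def average_response_time(concurrency):
    """Fire `concurrency` simultaneous calls and average their latencies."""
    samples, lock = [], threading.Lock()

    def worker():
        start = time.perf_counter()
        call_service(concurrency)
        elapsed = time.perf_counter() - start
        with lock:
            samples.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(samples) / len(samples)

def find_connection_ceiling(max_connections=50, step=5):
    """Ramp up load and return the last level that stayed under THRESHOLD."""
    ceiling = 0
    for n in range(step, max_connections + 1, step):
        if average_response_time(n) > THRESHOLD:
            break
        ceiling = n
    return ceiling
```

The ceiling returned here is the kind of figure the article says gets published in a performance handbook and later enforced as a connection limit.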
Once the ceiling on concurrent connections has been found, a limit is set to prevent more than that number of users from consuming the Web service at a given time. To prevent a drop in performance, overflow traffic can be placed in a queue.
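A minimal sketch of that throttling scheme, assuming a measured ceiling and using a fixed worker pool so that overflow requests simply wait in a FIFO queue. The class and method names are hypothetical, not from the article:

```python
import queue
import threading

class ThrottledService:
    """Cap a Web service at its measured connection ceiling.

    At most `ceiling` requests are handled concurrently; any overflow
    traffic waits in a FIFO queue instead of degrading response times.
    """

    def __init__(self, handler, ceiling):
        self._handler = handler
        self._overflow = queue.Queue()  # requests beyond the cap queue here
        for _ in range(ceiling):        # worker pool size = the connection limit
            threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, request):
        """Enqueue a request; the returned ticket is filled in when done."""
        ticket = {"done": threading.Event(), "result": None}
        self._overflow.put((request, ticket))
        return ticket

    def _drain(self):
        while True:
            request, ticket = self._overflow.get()
            ticket["result"] = self._handler(request)
            ticket["done"].set()
```

A caller submits a request and blocks on `ticket["done"].wait()`; however many clients arrive at once, the handler never sees more than `ceiling` simultaneous calls.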
Related services testing information
Web security: Web services an overlooked entry point for attacks - SearchSoftwareQuality
JetBlue enters SOA airspace - SearchSOA