Testing is becoming a hot topic in SOA policy and governance circles, said Miko Matsumura, vice president of technology standards at Infravio, Inc. and organizer of the SOALink initiative. Yesterday, SOALink, a group of companies focused on delivering interoperable solutions for SOA deployment, announced that four testing and quality assurance vendors have joined the organization. The new members are iTKO, Inc., Mindreef, Inc., Parasoft Corp. and Solstice Software Inc.
In honor of the occasion, Matsumura gathered the executives from the new SOALink member companies virtually via a conference call to discuss, among other topics, the question of how you test something as unpredictable as Web services interacting inside a service-oriented architecture. Here is some of what they had to say:
Chris Benedetto, vice president of marketing, Solstice Software: Testing in a Web services environment is extremely difficult for a variety of reasons. One of the ways we get around that is through the automatic creation of unit tests, regression tests and component tests, and then a correlation between all of those tests, between all the different discrete services. Just as a barcode is a unique license plate for a product, you can do correlation among a variety of different services in an automatic way, using an automated test tool. So, automation is one element.
Automating the creation of tests and creating a correlation between the different services is another tool. And a tool that we've talked about [at previous SOALink meetings] is simulation and recording. We offer that, and some of the other vendors have similar capabilities. Recording and simulation does exactly what it says. It records: you put listeners on a network or on a service, and you can actually record the inputs and the outputs and compare them to a baseline.
And simulation does much the same kind of thing. It plays back recordings, or you can actually simulate unavailable or even completely broken systems, or systems that haven't been built yet. That's how we handle it.
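The record-and-simulate technique Benedetto describes can be sketched in a few lines. This is an illustrative toy, not any vendor's product: the class names, the in-memory recording dictionary, and the stand-in "live service" are all assumptions made for the example; real tools capture traffic on the wire.

```python
# Minimal sketch of record-and-simulate for a service dependency.
# All names here are illustrative, not a real tool's API.

class RecordingProxy:
    """Wraps a live service, capturing each request/response pair as a baseline."""
    def __init__(self, service):
        self.service = service
        self.recordings = {}

    def call(self, request):
        response = self.service(request)
        self.recordings[request] = response  # saved for later comparison/playback
        return response

class SimulatedService:
    """Replays recorded responses, standing in for an unavailable service."""
    def __init__(self, recordings):
        self.recordings = recordings

    def call(self, request):
        if request not in self.recordings:
            raise KeyError(f"no recording for request: {request!r}")
        return self.recordings[request]

# Record against the real service, then simulate once it is down or unbuilt.
live = lambda req: req.upper()          # stand-in for a real Web service call
proxy = RecordingProxy(live)
proxy.call("get_quote")
sim = SimulatedService(proxy.recordings)
assert sim.call("get_quote") == "GET_QUOTE"
```

The same playback object can also back tests for a system that hasn't been built yet, as long as its expected responses can be scripted into the recordings.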
Wayne Ariola, vice president of corporate development, Parasoft: Most folks understand that working in a rich environment increases the complexity. So, I think every vendor [here] today has done something fairly unique. Everyone [here] has committed to a solution that really addresses the complexity of the XML itself. I would like to call the class of solutions we all represent "SOA aware."
What that means is that we not only have the facilities to take into account the complexities of the environment, and perhaps to simulate as well as emulate some of the calls, or the expected inputs and outputs that we see within a business process, but we've also developed a platform that gives us the capability to help someone very rapidly get into an environment where they are able to test this. The interesting part is that if you're in an SOA environment, knowing your intermediaries and being able to deal with them, such as by consuming policy, understanding policies and understanding some of the contracts associated with these service assets, gives you a real leg up in helping somebody move forward fast. Our goal is to eliminate the complexity of this type of testing.
In short, an SOA-aware environment means you're aware of the intermediaries associated with the transaction, as well as the partner links and the service invocations around you, and you're able to actually emulate that in a complex scenario, allowing the developer at first, and ultimately the QA engineer or the analyst, to exercise a broad range of scenario tests.
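Ariola's point about knowing your intermediaries and consuming policy can be made concrete with a small scenario test. Everything below is a hypothetical sketch: the policy shape (a required-headers list), the gateway-style intermediary, and the echo service are assumptions made for illustration, not Parasoft's implementation.

```python
# Illustrative sketch: an emulated intermediary that enforces a consumed
# policy before forwarding to an emulated service, enabling scenario tests
# without the real infrastructure. All names and shapes are hypothetical.

def emulated_service(request):
    # Stands in for the real service invocation at the end of the chain.
    return {"status": 200, "body": f"echo:{request['body']}"}

def emulated_intermediary(request, policy):
    # Rejects requests that violate the policy, as a real gateway would.
    for header in policy["required_headers"]:
        if header not in request.get("headers", {}):
            return {"status": 403, "body": f"missing header {header}"}
    return emulated_service(request)

# Scenario tests: one request satisfies the policy, one does not.
policy = {"required_headers": ["X-Auth-Token"]}
ok = emulated_intermediary(
    {"headers": {"X-Auth-Token": "t1"}, "body": "ping"}, policy)
denied = emulated_intermediary({"headers": {}, "body": "ping"}, policy)
assert ok == {"status": 200, "body": "echo:ping"}
assert denied["status"] == 403
```

Because the intermediary and the service are both emulated, a developer can exercise policy-violation scenarios long before the real gateway is reachable.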
Frank Grossman, president, Mindreef: One of the things we talk about, because it's a very difficult thing, is socialized quality. Why is it that we trust things like Google? Why do we trust their search? Why do we trust Wikipedia? It has to do with the fact that the more eyes are on something, and the more it can be publicly changed and modified, the higher the trust. So that's sort of what we look at with this.
You can't have instantaneous trust with everything. But knowing there are multiple eyes on this helps: I come in and look at the service from a different angle than they anticipated three years ago when they were building it, so I'd better add some tests to the test frame to show that it really works that way. Now the next person coming in has even more trust, because they see there are even more eyes on it. So socialized quality is pretty important here, because in most cases there is not a single person or entity looking over a service-oriented architecture and asking, how's the quality doing today? It's a very socialized piece, very much like the Web is.
Jim MacKay, chief marketing officer, iTKO: It is complex and it is tough to test. And we touched on simulation. I think that's an important aspect of this. You need to think of this through the entire lifecycle, which allows you to, number one, isolate the particular parts for testing, and number two, supply some context to SOA: context as far as the different services that you have as part of an end solution.
It's also an interaction as you start to get into things such as versioning, and simulation is a great tool to let you do that. When we talk about simulation, it's not record-playback simulation, although that's part of it. It's really, and I like Wayne's choice of the word, the ability to emulate what one side of an interaction would look like and see what the reaction is on the other side, and to do that in an isolated or contained environment, so that you get to the point where, using an expression we like to use quite a bit, you "test the implementation, not the interface." I think that's going to become more important as you think about how SOA is very loosely coupled and the services that you're talking to are late binding. The one thing that you can't lose track of in all this is that you need to ensure the service does what you expected it to do in the first place.
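MacKay's slogan, "test the implementation, not the interface," can be illustrated with a toy late-binding scenario. The discount services, their policy, and the behavioral test below are all invented for this sketch; the point is only that two services can satisfy the same interface while only one behaves correctly.

```python
# Toy illustration of "test the implementation, not the interface":
# two services expose the same interface, but only one behaves correctly.
# All names and the discount rule are hypothetical.

def discount_service_v1(order_total):
    """Intended behavior: 10% discount, but only on orders over 100."""
    return order_total * 0.9 if order_total > 100 else order_total

def discount_service_v2(order_total):
    """Same interface; broken behavior after a new version was late-bound."""
    return order_total * 0.9  # discounts every order, violating the rule

def behavior_test(service):
    # An interface-level check (accepts a number, returns a number) would
    # pass for both versions; only behavioral assertions catch the regression.
    return service(50) == 50 and service(200) == 180.0

assert behavior_test(discount_service_v1)
assert not behavior_test(discount_service_v2)
```

In a loosely coupled, late-binding SOA, this kind of behavioral test is what protects a consumer when a service behind a stable interface is silently replaced.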
In the old days we talked about matching it back against requirements, but in the new model what we're really talking about is the whole notion of policy: being able to map back and ensure that you're still meeting the needs and meeting what you laid out in the policies. I think what's going to come out of SOALink, and out of some of the individual relationships, thanks to Infravio, is that the whole notion of a policy now extends to cover policies across the entire lifecycle of the service.