Q

How do I balance throughput requirements and interoperability?

Are there any rules of thumb that would make one decide to do in-process interop versus Web services interop based on throughput requirements? In other words, if I have throughput issues, are Web services a bad choice?

A

Interoperability remains a problem essentially because of developer choices and a focus on what's needed to develop individual applications. If programs were designed explicitly for interoperability, it would be less of a problem. But developers like to use the advanced features of languages such as C#, Java, Ruby, SQL, XML, Perl, JavaScript and so on, which means taking full advantage of the built-in language data types. Interoperability studies show an inverse relationship between the complexity of the data types used in a program and its ability to interoperate with other programs, especially programs written in other languages.
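To make that concrete, here is a minimal Java illustration (the class and field names are hypothetical): the first type leans on nested generics and a JDK-specific collection class, while the second sticks to flat primitive and string fields that nearly every language can represent.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: two ways to model the same data.

// Hard to interoperate with: nested generics and a JDK-specific
// collection that have no direct equivalent in, say, C# or Perl.
class RichResult {
    ConcurrentHashMap<String, Map<Integer, Object>> buckets;
}

// Easy to interoperate with: flat fields of primitive and string
// types that nearly every language and wire format understands.
class PlainResult {
    String bucketName;
    int bucketId;
    String value;
}
```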

If you search the Web for the old SOAPBuilder results, for example, you can clearly see that the fewer the data types, the greater the interoperability level, especially when working with complex and structured data types. These advanced data types may give the developer a lot of power and flexibility and result in more compact code, but they interfere with interoperability since different languages do not support the same collection of data types, especially not the same set of complex data types. Web services attempts to resolve this problem by mapping individual language data types to XML data types, but the mapping is never 100% and never guaranteed to be automatic.
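As a sketch of where that mapping works and where it strains, consider a JAXB-annotated class (the Order type here is made up for the example): primitives and strings map directly onto XML Schema types, while a Map has no direct schema equivalent and needs a hand-written adapter, which is exactly the kind of gap that shows up differently in each language's toolkit.

```java
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical type for illustration.
@XmlRootElement(name = "order")
public class Order {
    @XmlElement public String id;       // maps cleanly to xs:string
    @XmlElement public int quantity;    // maps cleanly to xs:int

    // A Map has no direct XML Schema equivalent. JAXB needs a custom
    // XmlAdapter (via @XmlJavaTypeAdapter) to serialize it, and another
    // language's toolkit may reconstruct it as something quite different.
    // @XmlJavaTypeAdapter(MapAdapter.class)
    // public java.util.Map<String, Object> attributes;
}
```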

Performance is an issue with interoperability protocols, so if you have the luxury of staying in the same environment, such as Java EE or .NET, you are better off using a native communication protocol, such as RMI for Java EE or Microsoft's RPC for .NET. If you need to interoperate across Java EE and .NET programs, however, Web services is pretty much the only solution. In that case you have to accept the performance hit in exchange for the ability to communicate programmatically.
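For the same-environment case, here is what a native-protocol call looks like with Java RMI (the QuoteService interface and its method are invented for the example). The point is that both sides speak native Java types, so there is no XML mapping layer and no mapping loss, but also no way for a non-Java client to participate.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface, for illustration only.
interface QuoteService extends Remote {
    double getQuote(String symbol) throws RemoteException;
}

class QuoteServiceImpl implements QuoteService {
    public double getQuote(String symbol) { return 42.0; } // stub value
}

public class RmiServer {
    public static void main(String[] args) throws Exception {
        QuoteService stub = (QuoteService)
            UnicastRemoteObject.exportObject(new QuoteServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("QuoteService", stub);
        // A Java client can now call getQuote() with native Java types;
        // no XML mapping is involved, but only Java clients can connect.
    }
}
```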

Another place to look for a means of achieving interoperability is asynchronous messaging protocols, such as JMS or AMQP. REST/HTTP fits this area as well, at least conceptually (although not in execution; more on that below). The main idea is to avoid an interface built around data types. Instead, you put all the data into a big semi-structured message and pass it around like a file (which, basically, it is). The program on the sending side is responsible for packing the data into the message, and the program on the receiving side is responsible for unpacking the data and doing something with it, such as storing it in a database. This gives you a great deal more flexibility in dealing with data type issues, but at the cost of more complexity.
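A minimal JMS sketch of that pattern, assuming a provider that exposes the JNDI names jms/ConnectionFactory and jms/OrderQueue (both placeholders), might look like this. All of the data travels as one opaque text payload; the receiver, written in any language the broker supports, is responsible for parsing it.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class OrderSender {
    public static void main(String[] args) throws Exception {
        // JNDI names are deployment-specific; these are placeholders.
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf =
            (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

        Connection conn = cf.createConnection();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        // Pack everything into one semi-structured text payload; the
        // receiver parses it. No shared type system is assumed.
        TextMessage msg = session.createTextMessage(
            "{\"orderId\": \"A-100\", \"quantity\": 3}");
        producer.send(msg);
        conn.close();
    }
}
```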

REST/HTTP is a special case of this because it defines a fixed, uniform interface and allows for content type negotiation. The design center is generating and consuming representations of resources rather than defining and packing/unpacking messages per se (although conceptually something very similar is happening).
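To illustrate the fixed interface plus negotiation, here is a plain Java client (the URL is hypothetical): the verb and the resource never change; only the Accept header, and therefore the representation you get back, does.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource URL for illustration.
        URL url = new URL("http://example.com/orders/A-100");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");  // the fixed, uniform interface
        // Negotiate the representation; the same resource could be
        // requested as application/json with no change to the interface.
        conn.setRequestProperty("Accept", "application/xml");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```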

This was first published in June 2009
