Have you been following the evolution of OData? “O,” what, you ask? OData (Open Data Protocol) is a Web protocol for querying and updating data. It provides a way to unlock your data and free it from silos that exist in applications today. It does
this by defining a common set of patterns for working with data based on HTTP, AtomPub, and JSON. Started originally at Microsoft, OData has been gaining adherents. For instance, IBM uses OData to link eXtreme Scale REST data service with clients.
Meanwhile, eBay offers an OData API for searching items exposed through the eBay Finding API, using the query syntax and formats of the Open Data Protocol. Other high-profile users include Facebook, Netflix, and Zillow. Lance Olson, Group Program Manager at Microsoft, took time to answer some questions about what's new with OData and where it is going.
Briefly, what is OData, how and when did it get started?
Olson: OData got started in a project at Microsoft that is now called WCF Data Services. We were looking for an easier way to deal with data that was being passed through a service interface. We noticed that many of the current interfaces, whether based on SOAP, REST, or something else, were all using different patterns for exchanging data. For example, you might start with a custom method such as GetProductsByZip(zipCode). This method would then return a batch of records. You might then decide to add paging support. Then you'd want to bind the results to a grid and sort the data. Finally, you'd decide you want to query by something other than Zip code.
This approach has led to a couple of key challenges.
First, there is no way to build generic clients that do much with these APIs, because a client doesn't know the ordering of the parameters or the pattern being used. Since you can't build a generic client, you have to build a client for each API you want to consume. The simplicity of basic HTTP APIs helps with this, but it is still very costly. The growing diversity of clients that talk to these APIs only exacerbates the problem.
The second problem with this pattern is that it forces the service developer to make a difficult trade-off: how many queries should I expose? You have to strike a balance between exposing everything you can possibly imagine and exposing so little that the value of the service is diminished. The former leads to a proliferation of API surface area to manage; the latter results in what is often called a "data silo," where critical data is locked up behind a particular pattern and unavailable to other applications simply because the service doesn't expose the data in quite the way those applications need. Services tend to live much longer than a single application, so you really need to design the API to last; it isn't great if you find you have to keep adding new versions of the service interface as you build new clients. In many cases the service developer and the client developer aren't even the same person, so making changes to the service interface ranges from difficult to impossible.
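The proliferation Olson describes can be sketched in a few lines. This is a hypothetical example, not a real API: each new client requirement forces another fixed-signature method onto the service, even though every method is a small variation on the same query.

```python
# Hypothetical sketch of the "custom signature" pattern: every new client
# need adds another method, and the API surface keeps growing.

PRODUCTS = [
    {"name": "Widget", "zip": "98052", "price": 9.99},
    {"name": "Gadget", "zip": "98052", "price": 19.99},
    {"name": "Gizmo",  "zip": "10001", "price": 4.99},
]

def get_products_by_zip(zip_code):
    # Original method: all products for a Zip code.
    return [p for p in PRODUCTS if p["zip"] == zip_code]

def get_products_by_zip_paged(zip_code, page, page_size):
    # Added later: the same query, now with paging.
    rows = get_products_by_zip(zip_code)
    start = page * page_size
    return rows[start:start + page_size]

def get_products_by_zip_sorted(zip_code, sort_field):
    # Added later: the same query, sorted so a client can bind it to a grid.
    return sorted(get_products_by_zip(zip_code), key=lambda p: p[sort_field])

def get_products_by_name(name):
    # Added later: a client needed to query by something other than Zip.
    return [p for p in PRODUCTS if p["name"] == name]
```

A generic client can't be written against methods like these, because nothing in the interface tells it which parameters mean "page," "sort," or "filter."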
With OData we took a very different approach. Instead of creating custom signatures and parameters, we asked the following question: "What would a service interface look like if you treated data sets as resources and defined common patterns for the most frequently used operations, such as querying, paging, sorting, creating, deleting, and updating?" This brought us to the creation of OData. OData solves the key service interface design challenges mentioned above. Most importantly, OData enables client applications and libraries to be written once and then reused with any OData service endpoint. This has enabled a broad ecosystem of clients ranging from .NET on Windows to PHP on Linux, Java, and Objective-C on iOS. At Microsoft it has also enabled us to go farther than we could with traditional service interfaces, by adding OData support to products aimed at audiences beyond developers.
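The common patterns Olson mentions surface as standard system query options in the URI ($filter, $orderby, $top, $skip, as defined by the OData URI conventions). A minimal sketch in Python shows how one resource plus composable options replaces a whole family of custom methods; the service root URL here is hypothetical, but the query-option names are part of the protocol.

```python
from urllib.parse import urlencode

SERVICE_ROOT = "https://example.com/odata"  # hypothetical service root

def odata_query(entity_set, **options):
    # Map keyword arguments (filter, orderby, top, skip) to the
    # "$"-prefixed system query options OData defines.
    # safe="$" keeps the "$" prefix literal in the query string.
    params = {f"${k}": v for k, v in options.items()}
    return f"{SERVICE_ROOT}/{entity_set}?{urlencode(params, safe='$')}"

# Filtering, sorting, and paging compose on a single resource URI,
# so a generic client can query any entity set the same way.
url = odata_query(
    "Products",
    filter="Zip eq '98052'",
    orderby="Price",
    top=10,
    skip=20,
)
```

Because the option names are fixed by the protocol rather than invented per service, a client library written against them works against any conforming endpoint.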
Over time, we continued to see customers using the protocol for a broader set of scenarios than what you could do by using WCF Data Services. It was increasingly clear that we needed to be able to talk about the protocol on its own. As a result, in the fall of 2009 we launched the OData site and began talking about OData at Microsoft's Professional Developers Conference. This was followed up with a much larger presence in March of 2010 at Microsoft's MIX10 conference, which is when OData really began to get broader coverage on its own.
What is Microsoft's role now?
Olson: Today Microsoft has published its .NET client for OData under the Apache 2.0 license on the site. Microsoft has also released the OData protocol specification under its Open Specification Promise encouraging use by anyone who could benefit from it. Anyone who wants to do so can use OData. People interested in participating in the design can join in on the discussion via the OData mailing list.
Who seems to care about OData and who should care?
Olson: If you're using services to exchange data between two endpoints, you should take a look at OData. OData fits best in cases where you have services that share data, as in the GetProductsByZip example. The number of client platforms continues to grow at an increasing rate, and having a great API is becoming a requirement for consumer-facing Web properties that want to stay relevant. If you're in the enterprise, similar pressure is being felt to create applications that work well for employees, which often means providing key business information on the devices they use. OData applies to a broad set of industry categories, ranging from the consumer-facing Web to the enterprise to the public sector, and I'm seeing customers use OData across these segments. There is a list of public services and server implementations available, as well as client and application implementations, which gets updated whenever we hear about new ones.
How is OData progressing, what has changed, who is involved?
Olson: OData is officially just over 1 year old. In the first year we've seen amazing uptake as people continue to look for ways to better scale their APIs and deal with the diversity of the client ecosystem. Multiple new open source community implementations have cropped up, giving people fairly broad coverage across languages and platforms. While Microsoft is using OData in many of its products and services, there are also a number of external implementations, like IBM WebSphere, Facebook's Insights service, and the Netflix catalog. We're also seeing a lot of growth inside the firewall.
What is the outlook for the near future? What do you expect to happen in 2011 and what will this mean for the IT community?
Olson: One of the most significant things we'll see for OData in 2011 is the rollout of support for OData across a number of key server products in the IT space. That's exciting because it will continue to bring more momentum to the ecosystem, and that creates a positive cycle for the community. We'll also see more client tools and applications getting on board, like Tableau, which make it easier to visualize, analyze, shape, and combine data from all kinds of different sources to get a better view of the problems we need to solve.