In the first three articles of this series, I covered several of the major capability areas that API management tools should provide, including engaging with prospective consumers through a portal, managing the lifecycle of your services, exposing services through integrations with legacy systems and enforcing the policies associated with your contracts. In this last installment, I'll cover what I believe is one of the most important aspects of API management: instrumentation, analytics and reporting.
There is an old management adage, "You can't manage what you don't measure." If an organization is serious about not just exposing APIs and services but managing them, key metrics must be measured and used in the decision-making process.
Measuring and using metrics
It is important to recognize that measurement is more than setting some thresholds and reacting when the red light goes off. If data is not being collected, analyzed and used for decision-making on a regular basis, an organization may be reacting rather than managing. Therefore, to truly manage APIs, management tools must be capable of providing a robust set of information that enables management decisions.
In the previous article, I covered policy enforcement capabilities, which can be viewed as an automation of data collection, analysis and decisions. The gateway collects usage information, verifies that the active usage is within the boundaries set by the contract and, if not, rejects or throttles the requests accordingly. To do this, the gateway must collect metrics based solely on inspecting the traffic flow, including table stakes such as request counts, response time/latency and message sizes.
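To make this concrete, here is a minimal sketch of the collect-verify-throttle loop a gateway performs on each request. The contract shape, function names and in-memory stores are hypothetical illustrations, not any particular product's API; a real gateway would persist its metrics outside its own process.

```python
import time
from collections import defaultdict, deque

# Hypothetical per-consumer contract: at most `limit` requests per `window` seconds.
CONTRACTS = {"acme": {"limit": 100, "window": 60}}

_request_log = defaultdict(deque)  # consumer -> timestamps of recent requests
metrics = []                       # captured per-request metrics for later analysis

def handle_request(consumer, payload, now=None):
    """Record table-stakes metrics and throttle if the contract is exceeded."""
    now = now if now is not None else time.time()
    contract = CONTRACTS[consumer]
    log = _request_log[consumer]
    # Drop timestamps that have fallen out of the sliding window.
    while log and now - log[0] >= contract["window"]:
        log.popleft()
    allowed = len(log) < contract["limit"]
    if allowed:
        log.append(now)
    # Capture the metrics regardless of the throttling decision.
    metrics.append({
        "consumer": consumer,
        "timestamp": now,
        "request_bytes": len(payload),
        "allowed": allowed,
    })
    return allowed
```

Note that the metrics record is appended whether or not the request is allowed; rejected traffic is itself a signal worth analyzing later.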
You'll need a flexible mechanism for capturing information from the request and response messages themselves, as well as an understanding of how that information can be indexed, sliced and diced for further analysis. Be sure to evaluate the architecture for capturing these metrics, ensuring they can be stored for later use without putting the gateway itself at risk.
Analyzing gathered data
Once this information is obtained, look beyond simple throttling and immediate operational concerns and focus on how this information can be analyzed and used for management decisions. Trending is a valuable feature. Can the analysis determine that usage by a particular user is steadily increasing by 10% a month and approaching a threshold level?
If service use is being charged for, perhaps this becomes an opportunity to increase revenue through a proactive interaction, rather than dealing reactively with angry consumers whose requests are being throttled or rejected. Independent of throttling, is the same information being used to monitor organic growth and ensure capacity remains adequate? Are the patterns of access as expected? While all companies strive for no failures in production, a better goal is to strive for no unexplained behavior in production. You can't do this if you aren't watching for trends.
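As a rough illustration of the trending question above — is usage growing roughly 10% a month, and if so, when will it hit a contract threshold? — here is a simple projection over monthly request counts. The function names and the compound-growth assumption are mine, for illustration only:

```python
def monthly_growth_rate(usage):
    """Average month-over-month growth rate from a list of monthly request counts."""
    rates = [(b - a) / a for a, b in zip(usage, usage[1:]) if a > 0]
    return sum(rates) / len(rates) if rates else 0.0

def months_until_threshold(usage, threshold):
    """Project, at the current average growth rate, how many months until
    usage crosses the contract threshold. Returns None if growth is flat
    or negative (no crossing projected)."""
    rate = monthly_growth_rate(usage)
    if rate <= 0:
        return None
    months, level = 0, usage[-1]
    while level < threshold:
        level *= 1 + rate
        months += 1
    return months
```

A projection like this is what turns a raw usage counter into a proactive sales or capacity conversation months before the throttling policy ever fires.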
Beyond runtime concerns, is information on API portal usage being collected? If APIs and services are thought of as products, it's important to ask whether the right data is being captured from portal interactions to ensure prospects are finding the right APIs and services, and that those prospects are turning into customers.
What is the average turnaround time from prospect to contracted customer to actual use in production? How frequently are modifications to APIs and services needed to support new customers? Which APIs and services are commonly used in tandem, possibly creating upsell opportunities for consumers that are only using some of them?
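The tandem-usage question lends itself to straightforward analysis over subscription data. Here's a sketch, assuming a hypothetical mapping from each consumer to the set of APIs it uses:

```python
from itertools import combinations
from collections import Counter

def tandem_pairs(subscriptions):
    """Count how often each pair of APIs is consumed by the same customer.
    `subscriptions` maps consumer -> set of API names (assumed shape)."""
    pairs = Counter()
    for apis in subscriptions.values():
        for pair in combinations(sorted(apis), 2):
            pairs[pair] += 1
    return pairs

def upsell_candidates(subscriptions, min_tandem=2):
    """Consumers using only one half of a pair that is commonly used together."""
    pairs = tandem_pairs(subscriptions)
    suggestions = {}
    for consumer, apis in subscriptions.items():
        for (a, b), count in pairs.items():
            if count < min_tandem:
                continue
            if a in apis and b not in apis:
                suggestions.setdefault(consumer, set()).add(b)
            elif b in apis and a not in apis:
                suggestions.setdefault(consumer, set()).add(a)
    return suggestions
```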
API management tool features
These features are certainly a must if you're going to offer a public API catalog, but even if your APIs are only used internally, this product-manager thinking is still appropriate. In an internal environment, there is more risk of API sprawl. The typical project-based IT culture creates pressure to build for what each project needs, resulting in an increasing number of closely related services. An API management tool must be able to look at an entire portfolio of APIs and services and measure the degree of redundancy within it.
This means it needs to slice and dice a portfolio in different ways, whether through keyword matching within the actual interface, tagging, mapping against business capabilities, or other metadata chosen for classifying services. Don't forget version sprawl, as well. Can the API management tool show usage information by version? This information can help you decide when to gently nudge consumers of older versions and when to cut them off entirely.
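One simple way to slice a portfolio for redundancy is to compare the tag or keyword metadata attached to each API. The sketch below uses Jaccard similarity over hypothetical tag sets; real tools would also match on the interface definitions themselves and on business-capability mappings:

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two tag sets: 1.0 means identical, 0.0 means disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

def redundancy_report(portfolio, threshold=0.5):
    """Flag API pairs whose tag sets overlap enough to suggest duplication.
    `portfolio` maps API name -> set of tags/keywords (assumed metadata)."""
    return [
        (x, y, round(jaccard(portfolio[x], portfolio[y]), 2))
        for x, y in combinations(sorted(portfolio), 2)
        if jaccard(portfolio[x], portfolio[y]) >= threshold
    ]
```

The threshold here is a tuning knob: too low and every report is noise, too high and near-duplicate services slip through unnoticed.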
Finally, as a portfolio grows, there is increased risk in the long tail. Make sure reports can show when each service was last modified or last gained a new consumer, and review them on a regular basis.
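A long-tail report like the one just described can be as simple as flagging services with neither a recent modification nor a recent new consumer. This is a sketch with an assumed record shape, sorted so the staleest services surface first:

```python
from datetime import date

def long_tail(services, as_of, stale_days=365):
    """List services with no modification and no new consumer within `stale_days`.
    `services` maps name -> {"last_modified": date, "last_new_consumer": date}
    (assumed record shape). Returns (name, idle_days), staleest first."""
    stale = []
    for name, rec in services.items():
        # Days since the most recent activity of either kind; if even that
        # is old, the service qualifies as long tail.
        idle = min(
            (as_of - rec["last_modified"]).days,
            (as_of - rec["last_new_consumer"]).days,
        )
        if idle >= stale_days:
            stale.append((name, idle))
    return sorted(stale, key=lambda t: -t[1])
```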
It's OK if this sounds like a lot of capabilities. Like anything else, there's a maturity process that is going to be driven by specific needs and the demands of API consumers. It's still important to evaluate vendors on these capabilities because requirements will change, making it important to ensure that the technologies chosen can mature as time goes on.
About the author:
Todd Biske is an enterprise architect with a Fortune 50 company in the St. Louis metro area. He has had architecture experience in many different verticals, including healthcare, financial services, publishing and manufacturing, over the course of his 20-year career.