IT pros know that one of the challenges in application lifecycle management (ALM) is the lack of a statistical, measurable,
enforceable foundation goal for ALM. Application metrics have evolved as a way of measuring development and testing project progress, and there are a growing number of products available to collect and visualize data in an ALM flow. To make application metrics work for SOA ALM, you have to establish an application or component hierarchy, measure and weight ALM phase data correctly, and be as rigorous in managing changes to the process as you are in collecting the data. These three requirements are interactive, not sequential, so integrating them correctly is critical.
Application metrics cover a growing amount of ground every day, but in general they are divided into three areas:
- Satisfaction metrics: The measurement of user quality of experience (QoE) with the application
- Efficiency metrics: The measurement of how the application performs in a live environment
- Project metrics: How the ALM cycle itself meets goals and schedules
Fitting these together in SOA projects is complicated by the component nature of SOA applications.
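As a rough sketch, the three metric areas might be tagged in code so individual readings can be grouped for reporting; the metric names and values below are purely illustrative, not standard measures:

```python
from dataclasses import dataclass
from enum import Enum

class MetricArea(Enum):
    SATISFACTION = "satisfaction"   # user quality of experience (QoE)
    EFFICIENCY = "efficiency"       # live-environment performance
    PROJECT = "project"             # ALM goal and schedule conformance

@dataclass
class Metric:
    name: str
    area: MetricArea
    value: float

# Hypothetical readings, one from each area
readings = [
    Metric("avg_response_ms", MetricArea.EFFICIENCY, 180.0),
    Metric("survey_score", MetricArea.SATISFACTION, 4.2),
    Metric("milestones_on_time_pct", MetricArea.PROJECT, 87.5),
]

def by_area(metrics):
    """Group metric readings under the three areas for reporting."""
    groups = {area: [] for area in MetricArea}
    for m in metrics:
        groups[m.area].append(m)
    return groups
```

Keeping the area tag on every reading is what later lets satisfaction data be cross-referenced against efficiency and project data in one analysis.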
User satisfaction metrics should include measurements of both "objective" QoE, meaning application response time, and "subjective" QoE, meaning the way the application meets expectations in worker support and the time needed to make adjustments and corrections. Since this satisfaction data is obtained from the user side, it's independent of whether the application is SOA-based or of how components are reused.
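A minimal sketch of blending the two sides of QoE into a single index, assuming an illustrative response-time target, a 1-5 survey scale, and an even weighting (none of these are standard values):

```python
def qoe_score(avg_response_ms, survey_score, response_target_ms=250.0,
              objective_weight=0.5):
    """Blend objective QoE (response time against a target) with
    subjective QoE (a 1-5 survey score) into one 0-1 index.
    Target, scale and weight are illustrative assumptions."""
    # Objective side: 1.0 at or under target, degrading above it
    objective = min(1.0, response_target_ms / max(avg_response_ms, 1.0))
    # Subjective side: normalize the 1-5 survey scale to 0-1
    subjective = (survey_score - 1.0) / 4.0
    return objective_weight * objective + (1 - objective_weight) * subjective
```

Because both inputs come from the user side, this score stays meaningful whether or not the application behind it is SOA-based.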
The critical element in using application metrics with SOA ALM is the development of a two-dimensional hierarchy of interpretation. One dimension is the "componentization" of applications, particularly where multiple applications reuse components. Satisfaction information has to be logged against all components of the application, so that problems with a specific component can be traced across every application in which it appears. The other dimension is the question of process trigger versus process detail. Efficiency and satisfaction changes can trigger either a change in application or component functionality, or a change in the project's own processes -- the ALM decisions and flows.
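One way to realize the component dimension is to fan each application-level issue report out to every component that application uses, so a shared component accumulates reports from all of its hosts. The component and application names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical map: which applications reuse which components
app_components = {
    "order_entry": ["auth_svc", "pricing_svc", "ui_shell"],
    "invoicing": ["auth_svc", "pricing_svc", "pdf_render"],
    "reporting": ["auth_svc", "query_svc"],
}

def log_issues_by_component(apps_with_issues):
    """Fan each application-level satisfaction issue out to every
    component of that application, so a component shared by several
    applications accumulates reports from all of them."""
    counts = defaultdict(int)
    for app in apps_with_issues:
        for comp in app_components[app]:
            counts[comp] += 1
    return dict(counts)
```

A component that scores high across several applications stands out immediately, which is exactly the cross-application visibility the hierarchy is meant to provide.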
The third stage in SOA ALM is to measure the ALM processes themselves against their objectives and schedules. Most ALM users will test their processes against their goals and schedules, but this can miss the important question of whether those goals and schedules align with actual project requirements. Planning-stage failures in projects often result in misalignments of this type, and conventional goal analysis will miss them because the goals themselves are wrong. The satisfaction metrics gathered on the project can expose this goal problem and drive the realignment of goals and schedules with actual expectations.
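The distinction between a bad goal and bad execution can be sketched as a simple disposition rule: a milestone that met its schedule but still scores low on satisfaction suggests the goal itself was misaligned. The thresholds and milestone data below are illustrative:

```python
def realign_goals(planned_days, actual_days, satisfaction,
                  satisfaction_floor=0.7):
    """Classify each milestone: met schedule and satisfied users ("ok"),
    met schedule but left users unsatisfied ("revisit goal" -- a likely
    planning-stage misalignment), or missed schedule ("revisit schedule").
    The 0.7 satisfaction floor is an illustrative cutoff."""
    dispositions = {}
    for milestone, planned in planned_days.items():
        on_time = actual_days[milestone] <= planned
        satisfied = satisfaction[milestone] >= satisfaction_floor
        if on_time and satisfied:
            dispositions[milestone] = "ok"
        elif on_time:
            dispositions[milestone] = "revisit goal"
        else:
            dispositions[milestone] = "revisit schedule"
    return dispositions
```

The "revisit goal" branch is the case conventional goal analysis misses: everything hit its targets, but the targets were wrong.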
Successful users typically start by gathering user satisfaction information: goal and schedule conformance at the project level, and worker QoE with the application in both a performance and a functional sense. This requires a combination of surveys (in most cases timed to coincide with project release schedules) and monitoring of response times and other worker metrics. One useful data element often ignored at this stage is how long workers spend on a given screen; an unusually long dwell time may indicate confusion in data presentation.
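The dwell-time idea can be sketched as a simple outlier check across screens; the factor-of-two cutoff is an illustrative assumption to be tuned per application, not a standard:

```python
def flag_confusing_screens(dwell_times, threshold_factor=2.0):
    """Flag screens whose average dwell time is well above the overall
    mean -- a possible sign of confusing data presentation.
    dwell_times: {screen_name: [seconds per visit, ...]}
    threshold_factor is an illustrative cutoff."""
    averages = {s: sum(t) / len(t) for s, t in dwell_times.items()}
    overall = sum(averages.values()) / len(averages)
    return sorted(s for s, avg in averages.items()
                  if avg > threshold_factor * overall)
```

A flagged screen is a lead for the survey side of the analysis, not proof of a problem by itself.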
The output of a satisfaction analysis is a combination of functional problems with applications and problems with project management. It is important to integrate the two quickly: in a properly managed project, functional problems shouldn't arise at all, so they point to failures of ALM processes even if users don't make that connection themselves. The best approach is to start by looking at how functional problems were created (and missed in testing) and presume the project's goals and processes should be changed to address this. The revised project goals and processes are then reviewed against satisfaction problems that are specifically linked with ALM project management.
Remember the component dimension of the interpretative hierarchy. Problems with SOA applications are often issues with a set of specific components that, when reused, will affect other applications as well. That reuse property can be helpful in identifying which components are at fault (the ones in common), and in predicting where future problems might arise (in applications that use those components, even if they aren't reporting satisfaction issues yet).
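The reuse property suggests a straightforward set computation: intersect the component lists of the applications reporting problems, then look for quiet applications that share those components. All names and data shapes here are hypothetical:

```python
def suspect_components(app_components, apps_with_issues):
    """Components shared by every application reporting satisfaction
    problems are the first suspects; applications that also use one of
    them but report no issues yet are where problems may surface next.
    app_components: {app_name: [component names]}"""
    issue_sets = [set(app_components[a]) for a in apps_with_issues]
    common = set.intersection(*issue_sets) if issue_sets else set()
    at_risk = {a for a, comps in app_components.items()
               if a not in apps_with_issues and common & set(comps)}
    return common, at_risk
```

The second return value gives the predictive payoff the text describes: applications to watch before they start reporting satisfaction issues.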
Where application functionality or performance problems can't be traced to specific components shared by applications, there could be a problem with the orchestration process or the pool of application resources. Most users don't find it helpful to dive into server and storage utilization or network delays at the start of an application metrics analysis, because doing so creates too many metrics and too confusing a framework of data to interpret. Once a performance problem has been isolated through metrics, though, or once it's clear that no component-level fault can be identified, diving down to resource and network data might be necessary.
To analyze resource-level data, correlate the resources with the components first. Most project teams understand which of the applications' components are most likely to be heavy users of resources of any type, or otherwise contribute to problems with application response times or functionality. Measure these resources to identify potential problems in situations where worker satisfaction metrics are low. Always start with satisfaction problems, move to related component identification, look at component-level metrics, then track resource usage. Other patterns create a disorderly analysis that will be difficult to relate to project changes even if a problem is found.
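The recommended ordering -- satisfaction first, then related components, then component metrics, then resource usage -- might be sketched as a drill-down; the data shapes and thresholds below are illustrative:

```python
def drill_down(satisfaction, component_metrics, resource_usage,
               qoe_floor=0.7, latency_limit_ms=500):
    """Follow the ordering in the text: start from low-satisfaction
    applications, check their component-level metrics, and only then
    pull resource usage for the implicated components.
    satisfaction: {app: 0-1 score}
    component_metrics: {app: {component: latency_ms}}
    resource_usage: {component: resource data}
    Thresholds are illustrative assumptions."""
    findings = []
    for app, score in satisfaction.items():
        if score >= qoe_floor:
            continue  # only drill into applications with low satisfaction
        for comp, latency_ms in component_metrics.get(app, {}).items():
            if latency_ms > latency_limit_ms:  # component-level symptom
                findings.append((app, comp, resource_usage.get(comp)))
    return findings
```

Resource data enters the picture last and only for implicated components, which keeps the volume of metrics interpretable.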
When metrics have uncovered a problem and driven a solution, carrying that solution into redeployments should become an explicit ALM goal for the application. Closing the loop here ties application metrics analysis back to user satisfaction through the ALM project processes -- the ultimate goal for any development organization.