3 Ways to Solve App Performance Problems with Transaction Tracking – Part 2

In part 1, I discussed the importance of tracking applications and how it is similar to tracking packages. However, there is one significant difference between the two: applications don’t have bar codes. Collecting the tracking events as data moves through the application requires additional processing. There are many techniques for doing this. The application can generate the events itself, in the form of a log or audit trail. In cases where it doesn’t, instrumenting the underlying system is an option, depending on what facilities it provides. Given an application that executes through DataPower, IIB (Broker) and MQ, Nastel leverages several techniques to create the tracking events.

DataPower

DataPower can act as a front end or an intermediary node. These flows are among the key ones that require visibility. Unfortunately, there is simply no place to do that centrally. We have found that the best way is to make slight modifications to the flows to collect the required data and send it as tracking events. Using this method, we can track flows in very granular detail, as well as failures and performance problems within a flow. Many application flows already have some form of built-in tracking that can easily be leveraged as well.

IBM Integration Bus / IIB (Broker)

The Broker supports a very rich mechanism for tracking Message Flows. Without changing the internal structure of the flows, as is required in DataPower, you can still get that level of detail, including:

  • Transaction Start / Stop (default)
  • When a given node was processed
  • Message content being processed by the flow
  • Track message flows in and across brokers

You have controls within the broker to determine what type of data is sent and how much detail to include. The data collected is published to broker topics, and those publications are then forwarded as tracking events.
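As a sketch of how such a publication might be turned into a tracking event, assuming a deliberately simplified XML payload (the real wmb:monitoringEvent schema carries many more fields, and the element names below are illustrative only):

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an IIB monitoring-event publication.
SAMPLE_EVENT = """\
<monitoringEvent>
  <eventPointData>
    <eventIdentity eventName="OrderFlow.transaction.Start"/>
    <eventSequence creationTime="2016-03-01T12:00:00Z"/>
  </eventPointData>
  <applicationData flowName="OrderFlow" nodeName="HTTPInput"/>
</monitoringEvent>
"""

def to_tracking_event(xml_text):
    """Convert a broker monitoring publication into a flat tracking event."""
    root = ET.fromstring(xml_text)
    identity = root.find("./eventPointData/eventIdentity")
    sequence = root.find("./eventPointData/eventSequence")
    app = root.find("./applicationData")
    return {
        "event": identity.get("eventName"),
        "time": sequence.get("creationTime"),
        "flow": app.get("flowName"),
        "node": app.get("nodeName"),
    }

evt = to_tracking_event(SAMPLE_EVENT)
print(evt["event"], evt["flow"], evt["node"])
```

A forwarder subscribed to the broker topics would apply a transformation like this to each publication before sending it on to the central tracking repository.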

IBM MQ

IBM MQ provides two options for collecting the data, depending on the version of MQ.

Using MQ API Exits

Available in all versions of distributed MQ, MQ API exits can be used to capture information as it flows through the MQ environment. When an application makes an MQ call, the queue manager passes information about the call to the exit program. The exit program examines the call and its data to decide what to do with the information. This allows us to track information as it flows through the application environment and across the sending and receiving channels to multiple queue managers (distributed and mainframe).
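The interception pattern behind an API exit can be sketched in Python. Real exits are native libraries registered with the queue manager; the `TrackedQueue` class and `emit_tracking_event` function below are hypothetical stand-ins that only model the before/after capture around put and get calls:

```python
import time
from collections import deque

TRACKING_EVENTS = []  # stand-in for the central tracking repository

def emit_tracking_event(operation, queue, payload):
    """What an API exit does: record who touched which queue, and when."""
    TRACKING_EVENTS.append({
        "op": operation,
        "queue": queue,
        "bytes": len(payload),
        "ts": time.time(),
    })

class TrackedQueue:
    """Toy queue whose put/get are intercepted, the way an MQ API exit
    intercepts MQPUT/MQGET around the real call."""
    def __init__(self, name):
        self.name = name
        self._messages = deque()

    def put(self, payload):
        emit_tracking_event("MQPUT", self.name, payload)
        self._messages.append(payload)

    def get(self):
        payload = self._messages.popleft()
        emit_tracking_event("MQGET", self.name, payload)
        return payload

q = TrackedQueue("ORDERS.IN")
q.put(b"order-123")
msg = q.get()
print([e["op"] for e in TRACKING_EVENTS])  # ['MQPUT', 'MQGET']
```

The important property is that the application code itself is unchanged; the interception layer produces the events transparently.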

Tracking Using Application Activity Trace

With MQ 7.1 and above, the queue manager can be configured to generate the tracking events itself. The MQ Appliance uses this method exclusively. The data collected is the same as when using MQ API exits. The activity trace method has some advantages over the exit approach: no code needs to be installed on the queue manager, it is easier to enable and disable, and it is easy to set up for remote access. However, it currently supports only limited filtering on the host MQ server, which can mean increased network traffic.
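Because server-side filtering is limited, a collector typically filters the trace records itself before forwarding them downstream. A minimal sketch, with illustrative field names rather than the actual PCF field identifiers:

```python
def filter_trace_records(records, queues_of_interest):
    """Keep only activity-trace records that touch queues we care about.
    The record shape here is illustrative, not the real PCF layout."""
    keep = set(queues_of_interest)
    return [r for r in records if r.get("queue") in keep]

records = [
    {"op": "MQPUT", "queue": "ORDERS.IN", "app": "webstore"},
    {"op": "MQGET", "queue": "SYSTEM.ADMIN.X", "app": "admin"},
    {"op": "MQGET", "queue": "ORDERS.IN", "app": "fulfilment"},
]
interesting = filter_trace_records(records, {"ORDERS.IN"})
print(len(interesting))  # 2
```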

Independent of which method is used, the tracking events provide the information needed to see inside the MQ processing.

Managed File Transfer (MFT/FTE)

Many customers are currently integrating managed file transfer into their applications to exchange files with MQ flows and the broker (IIB). The MFT coordinator publishes tracking events to record this activity, which lets you see each transfer and any failures.
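As an illustration, those transfer publications can be reduced to an alert list of failed transfers. The record shape, transfer ids and failure reason below are invented for the example, not the actual MFT message format:

```python
# Simplified stand-in for MFT transfer-status publications.
transfer_events = [
    {"transfer_id": "xfer-01", "status": "started"},
    {"transfer_id": "xfer-01", "status": "succeeded"},
    {"transfer_id": "xfer-02", "status": "started"},
    {"transfer_id": "xfer-02", "status": "failed",
     "reason": "destination file exists"},
]

def failed_transfers(events):
    """Surface failed transfers and their reasons for alerting."""
    return [(e["transfer_id"], e.get("reason", "unknown"))
            for e in events if e["status"] == "failed"]

print(failed_transfers(transfer_events))
```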

Summary

As noted in the introduction, the goal is a combined flow across the environment. You need visibility into one or more of the technologies involved, such as browsers, mobile, DataPower, Broker, Managed File Transfer, MQ, Java applications and many more.

Nastel AutoPilot TransactionWorks analyzes this tracking information, interprets the data and produces the global transaction mapping. When the events track across all of these technologies, we can provide a complete picture of the application flow through multiple environments.

Read Part 1 of this 2-part series: “3 Ways to Solve App Performance Problems with Transaction Tracking”.

To learn more, watch the TechTalk here

3 Ways to Solve App Performance Problems with Transaction Tracking

Transaction Tracking can provide tremendous insight into the performance and behavior of applications. However, this can be challenging when transactions traverse platforms and technologies. Often it is like tracking someone in the old Westerns: you follow the trail to the river and then lose track of where they went next. Tracking MQ transactions faces the same hurdle, with MQ running on diverse platforms spanning multiple locations. MQ transactions typically interact with other platforms such as IBM Integration Bus (Broker) and IBM DataPower. Visualizing a dynamic flow of transactions across all of these environments is well worth the effort, as it greatly simplifies problem detection while reducing the mean time to resolve problems (MTTR).

Concepts of Transaction Tracing

Package tracking is an analogy that can be used to explain the concepts behind transaction tracking. A package is sent from location A to location B, with tracking notices generated to let the sender know where the package is in transit, the expected time of arrival, and when it actually arrived. Package tracking stitches together disjoint technologies, much like a middleware environment. The delivery process can be complex, with cost and timeliness being the major concerns. No matter how fast you deliver a package, someone always wants it quicker. The same problems affect MQ transactions; instead of a package, it is a message that never seems to reach its destination fast enough.

Users have a common set of questions about package tracking. For the customer, it might be: where is my package, and is it progressing as planned? If you work for the shipping company, you are more concerned with where the bottlenecks are in the system, how to solve a problem, how to stop a problem from recurring, and what issues to expect tomorrow. As a technician, you want to know where failures are occurring and where you can make improvements.

Package tracking involves delivering packages, scanning them, and exporting the events from the scanners into a database for later analysis. The key to tracking anything is to create tracking events that capture the key steps, such as pick, pack or ship, and the time at which each occurred.
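A minimal sketch of such a tracking event, using hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrackingEvent:
    """Minimal tracking event: what happened, to which item, and when."""
    item_id: str
    step: str        # e.g. "pick", "pack", "ship"
    timestamp: datetime

def record(item_id, step):
    """Create a tracking event stamped with the current UTC time."""
    return TrackingEvent(item_id, step, datetime.now(timezone.utc))

# The full trail for one package is just the ordered list of its events.
trail = [record("PKG-42", s) for s in ("pick", "pack", "ship")]
print([e.step for e in trail])
```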

Transaction Tracking for MQ

There is a common set of behavior patterns for MQ, though each deployment can be unique. Typically, you have senders and receivers, as well as queues being processed, with one or more queue managers communicating with each other. With multiple applications running on different servers, as in the package tracking example, every time a message is sent or received, the details about that message and its processing should be captured and sent to a central location. This provides the ability to understand what is occurring. If a transaction is stuck or slow, we can react to it, or produce warnings when it takes too long. We can also gather statistics along the way to see the duration of each step. Capturing raw metrics about message flows and then correlating them into a big picture helps the user solve performance problems faster.
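The correlation step described above can be sketched as follows; the event tuples, correlation ids and the five-second threshold are assumptions for illustration:

```python
from collections import defaultdict

SLOW_THRESHOLD = 5.0  # seconds; an assumed SLA for illustration

def correlate(events):
    """Group tracking events by correlation id and compute step durations.
    Each event is (correlation_id, step_name, timestamp_seconds)."""
    by_txn = defaultdict(list)
    for corr_id, step, ts in events:
        by_txn[corr_id].append((ts, step))
    report = {}
    for corr_id, steps in by_txn.items():
        steps.sort()  # order by timestamp
        durations = [
            (steps[i + 1][1], steps[i + 1][0] - steps[i][0])
            for i in range(len(steps) - 1)
        ]
        total = steps[-1][0] - steps[0][0]
        report[corr_id] = {"steps": durations, "total": total,
                           "slow": total > SLOW_THRESHOLD}
    return report

events = [
    ("txn-1", "sent", 0.0), ("txn-1", "broker", 1.5), ("txn-1", "received", 2.0),
    ("txn-2", "sent", 0.0), ("txn-2", "received", 9.0),
]
r = correlate(events)
print(r["txn-1"]["total"], r["txn-2"]["slow"])  # 2.0 True
```

Per-step durations make it immediately visible which hop of a slow transaction is responsible for the delay.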

Whether you are a corporate manager, Line of Business owner, application support group or IT infrastructure team, you need end-to-end visibility into the transactions that are relevant to you.

Stay tuned for the next installment in this series, “3 Ways to Solve App Performance Problems with Transaction Tracking”.


To learn more, watch the TechTalk here

The Best Ways to Solve DataPower Issues that Impact Applications – Part 4 of 4

[This is Part 4 in a 4-part series. Catch up with Part 1 here.]

Converting DataPower metrics and events into actionable intelligence  

DataPower appliances have several management APIs and interfaces providing detailed information about system operations and performance. By using these interfaces, we can capture a very broad range of configuration and status data.

The Best Ways to Solve DataPower Issues that Impact Applications – Part 3 of 4

[This is Part 3 of a 4-part series. Catch up with Part 1 here.]

Common DataPower problems …

Like any piece of sophisticated middleware, the DataPower Gateway appliance has to be carefully managed and closely monitored.  Even though it is a robust, purpose-built device, any company that is using DataPower appliances to run serious production workloads is going to encounter their share of problems sooner or later.

According to the IBM Redbook for DataPower Implementation and Best Practices, the most common run-time issues are:

  • Configuration changes
  • Misconfigured service policies
  • XML formatting issues
  • Transaction latency issues
  • High CPU usage
  • Memory growth
  • High load
  • File system space issues
  • Network connectivity issues
  • Unexpected restart

Many of these issues stem from the rapid pace at which this technology is being adopted in the enterprise. As more business services are hosted on DataPower appliances, change accelerates, which increases both the probability of errors and the likelihood that service processes will be deployed incorrectly or will fail.

Common troubleshooting tasks

In these situations, the people responsible for designing DataPower service processes are recruited to get involved in troubleshooting.

Some of the most common troubleshooting tasks are:

  • Generating error reports, to get a snapshot of system status and debug information
  • Viewing logs, to capture details about processing behavior and transaction flows, and then trying to correlate these with any errors that have been observed
  • Enabling statistics for CPU consumption, memory usage, and transaction rates
  • Searching for latency messages contained in logs, which measure the elapsed time since the beginning of each transaction
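Scanning logs for latency records can be partially automated. DataPower's actual latency log format varies by firmware and contains many elapsed-time fields, so this sketch assumes a simplified line format:

```python
import re

# Assumed simplified latency line: "latency: <tid> <elapsed-ms> <url>".
LATENCY_LINE = re.compile(r"latency:\s+(\S+)\s+(\d+)\s+(\S+)")

def slow_transactions(log_lines, threshold_ms=1000):
    """Return (tid, elapsed_ms, url) for transactions over the threshold."""
    hits = []
    for line in log_lines:
        m = LATENCY_LINE.search(line)
        if m and int(m.group(2)) > threshold_ms:
            hits.append((m.group(1), int(m.group(2)), m.group(3)))
    return hits

log = [
    "latency: tid-101 250 /orders",
    "latency: tid-102 4200 /orders",
    "noise line with no latency record",
]
print(slow_transactions(log))  # [('tid-102', 4200, '/orders')]
```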

However, some of the most granular troubleshooting tools, such as setting the log level to “debug” or using the Multistep Probe, were not designed to be used at runtime.

In fact, IBM recommends that they only be used during development and testing because they are intrusive and generate a large amount of data. These tools actually degrade the performance of the DataPower appliance, and the volume of data they generate can overwhelm the user.

The truth is, troubleshooting DataPower issues with the tool-set that comes with the appliance can be very daunting. Middleware experts often have to spend hours scanning and analyzing logs, tracing transaction flows and measuring application performance metrics.

Manual analysis of logs and application metrics is a very tedious and costly endeavor. When business processes fail or misbehave, it can be very challenging to attempt to “piece together” a story from these log entries and data points.

To learn more about solving DataPower problems, read Part 4, the last installment in the series,  “The Best Ways to Solve DataPower Issues that Impact Applications”.  Continue reading about how situational analytics can help you get the visibility you need to solve problems faster using real-time metrics and transaction analytics.

For more information on DataPower check out the on-demand TechTalk, “3 Ways to Solve DataPower Issues That Impact Applications”.

The Best Ways to Solve DataPower Issues that Impact Applications – Part 2 of 4

Monitoring DataPower appliances

[Infographic: 3 ways to instrument DataPower]

DataPower appliances have several management interfaces that provide detailed information about system health, operations and performance. These metrics can be monitored through several management protocols, including SNMP, WSDM (Web Services Distributed Management), logging and other XML-based APIs.

Authorized management and monitoring tools can subscribe to information about the appliance using these protocols, allowing administrators to access and capture a very broad range of configuration and status data.

Was your software vendor acquired?

Nastel is the better and safer bet for middleware monitoring and management

Recently, one of the “big four” software firms was acquired by a group of investors led by Bain Capital. This is good news: it shows demonstrable health in the IT sector, in that investors are willing to take the risk of purchasing a software vendor.