3 Ways to Solve App Performance Problems with Transaction Tracking – Part 2

In Part 1, I discussed the importance of tracking applications and how it resembles tracking packages. However, there is one significant difference between the two: applications don't have bar codes. Collecting tracking events as data moves through an application requires additional processing, and there are many techniques for doing it. The application can generate the events itself, in the form of a log or audit trail. Where it doesn't, instrumenting the underlying system is an option, depending on what facilities that system provides. For an application that executes through DataPower, IIB (Broker) and MQ, Nastel leverages several techniques to create the tracking events.
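
To make this concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of self-generated tracking event an application could write to its log or audit trail. The field names, file name and step names are assumptions for the example, not a Nastel or IBM format.

    import json
    import socket
    import time
    import uuid

    def tracking_event(correlation_id, step, status, elapsed_ms):
        # One tracking event for one step of a business transaction.
        # The schema is illustrative; a real deployment would agree on a
        # format that the downstream analysis tool understands.
        return {
            "correlationId": correlation_id,  # ties all steps of one transaction together
            "step": step,
            "host": socket.gethostname(),
            "timestamp": time.time(),
            "status": status,                 # e.g. "OK" or "ERROR"
            "elapsedMs": elapsed_ms,
        }

    if __name__ == "__main__":
        corr_id = str(uuid.uuid4())
        # An application that generates its own events simply appends records
        # like these to an audit trail as the transaction moves through it.
        with open("transaction_audit.log", "a", encoding="utf-8") as audit:
            audit.write(json.dumps(tracking_event(corr_id, "order-received", "OK", 0)) + "\n")
            audit.write(json.dumps(tracking_event(corr_id, "order-validated", "OK", 12)) + "\n")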

DataPower

DataPower can act as a front end or an intermediary node, and these flows are among the key ones that require visibility. Unfortunately, there is simply no single place to collect that data centrally. We have found that the best approach is to make slight modifications to the flows to collect the required data and send it as tracking events. Using this method, we can track the flows passing through DataPower in very granular detail, as well as failures or performance problems within a flow. Many application flows already have some form of built-in tracking that can easily be leveraged as well.

IBM Integration Bus / IIB (Broker)

The Broker supports a very rich mechanism for tracking message flows. Without changing the internal structure of the flows, as is required in DataPower, you can still capture the same level of detail, including:

  • Transaction Start / Stop (default)
  • When a given node was processed
  • Message content being processed by the flow
  • Message flows tracked within and across brokers

The broker gives you controls to determine what type of data is sent and how much detail to include. The data collected is published to broker monitoring topics, and those publications are then forwarded as tracking events.
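
One common way to forward those publications is an administrative subscription that routes the broker's monitoring topic onto a queue read by the tracking collector. The MQSC below is a minimal sketch: the queue and subscription names are assumptions, and the $SYS/Broker topic string follows the documented IIB monitoring topic layout, so it should be checked against your broker version.

    DEFINE QLOCAL(TRACKING.EVENTS) REPLACE
    DEFINE SUB(BROKER.MONITORING.SUB) +
           TOPICSTR('$SYS/Broker/+/Monitoring/#') +
           DEST(TRACKING.EVENTS) DESTCLAS(PROVIDED) REPLACE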

IBM MQ

IBM MQ provides two options for collecting the data, depending on the version of MQ.

Using MQ API Exits

Available in all versions of distributed MQ, API exits can be used to capture information as it flows through the MQ environment. When an application makes an MQ call, the queue manager passes information about that call to the exit program. The exit program examines the call and its data to decide what to do with the information. This allows us to track information as it flows through the application environment and across the sending and receiving channels to multiple queue managers (distributed and mainframe).
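
As a sketch of how an API exit is hooked into a distributed queue manager, the qm.ini stanza below registers an exit module; the exit name, entry-point function and module path are placeholders for whatever exit library is actually deployed.

    ApiExitLocal:
       Name=TransactionTrackingExit
       Sequence=100
       Function=EntryPoint
       Module=/var/mqm/exits64/trackingexit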

Tracking Using Application Activity Trace

With MQ 7.1 and above, the queue manager itself can be configured to generate the tracking events; the MQ Appliance uses this method exclusively. The data collected is the same as with API exits. The activity trace method has some advantages over the exit approach: there is no code to install on the queue manager, it is easier to enable and disable, and it is easy to set up for remote access. However, it currently supports only limited filtering on the host MQ server, which can mean increased network traffic.
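
Enabling the application activity trace is a queue manager setting rather than installed code. A minimal example (MQ 7.1 and later) is shown below; the trace records are written as PCF messages to SYSTEM.ADMIN.TRACE.ACTIVITY.QUEUE, where a collector can read them and turn them into tracking events, and finer-grained per-application control is available through the mqat.ini file.

    * Turn application activity trace on for the queue manager
    ALTER QMGR ACTVTRC(ON)

    * Confirm the setting
    DISPLAY QMGR ACTVTRC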

Independent of which method is used, the tracking events provide the information needed to see inside MQ processing.

Managed File Transfer (MFT/FTE)

Many customers currently integrate managed file transfer into their applications to move files between MQ flows and the broker (IIB). The MFT coordinator publishes tracking events that record this activity, which lets you see each transfer and any failures.
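
These MFT publications can be captured in the same way as the broker monitoring events: the coordination queue manager publishes transfer log messages under the SYSTEM.FTE topic, so an administrative subscription like the sketch below (the subscription name is an assumption, and it reuses the tracking queue defined earlier) routes them to a queue for the tracking collector.

    DEFINE SUB(MFT.LOG.SUB) +
           TOPICSTR('SYSTEM.FTE/Log/#') +
           DEST(TRACKING.EVENTS) DESTCLAS(PROVIDED) REPLACE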

Summary

As noted in the introduction, the goal is a combined view of the flow across the environment. You need visibility into each of the technologies involved: browsers, mobile, DataPower, Broker, Managed File Transfer, MQ, Java applications and many more.

Nastel AutoPilot TransactionWorks analyzes this tracking information, interprets the data and produces a global transaction mapping. When events are tracked across all of these technologies, we can provide a complete picture of the application flow through multiple environments.

Read Part 1 of this two-part series: "3 Ways to Solve App Performance Problems with Transaction Tracking".

To learn more, watch the TechTalk here

3 Ways to Solve App Performance Problems with Transaction Tracking

Transaction tracking can provide tremendous insight into the performance and behavior of applications. However, it can be challenging when transactions traverse platforms and technologies. Often it is like tracking someone in the old Westerns: you follow the trail to the river and then lose track of where they went next. Tracking MQ transactions presents the same hurdle, with MQ running on diverse platforms spanning multiple locations. MQ transactions typically interact with other platforms such as IBM Integration Bus (Broker) and IBM DataPower. Visualizing the dynamic flow of transactions across all of these environments is well worth the effort, as it greatly simplifies problem detection while reducing the mean time to resolve problems (MTTR).

Concepts of Transaction Tracing

Package tracking is an analogy that can be used to explain the concepts behind transaction tracking. A package is sent from location A to location B, with tracking notices generated to let the sender know where the package is in transit, when it is expected to arrive and when it actually arrived. Package tracking is a combination of disjoint technologies, similar to the middleware environment, and the process can be complex, with cost and timeliness of delivery being major concerns. No matter how fast you deliver a package, someone always wants it quicker. The same problems affect MQ transactions, but instead of a package, it is a message that never seems to reach its destination fast enough.

There are a set of common questions users have about package tracking. For the customer, it might be: where is my package, or is my package progressing as planned? If you work for the shipping company, you are more concerned with where the bottlenecks are in the system, how to solve a problem, how to stop a problem from recurring and what issues to expect tomorrow. As a technician, you want to know where failures are occurring and where you can make improvements.

Package tracking involves delivering packages, scanning them along the way and exporting the events from the scanners into a database for later analysis. The key to tracking anything is to create tracking events that capture the key steps, such as pick, pack or ship, and when they occurred.

Transaction Tracking for MQ

MQ applications follow a common set of behavior patterns, although each deployment has its own variations. Typically, you have senders and receivers, and queues being processed, with one or more queue managers communicating with each other. With multiple applications running on different servers, as in the package tracking example, every time a message is sent or received, the details about that message and its processing should be captured and sent to a central location. This provides the ability to understand what is occurring: if a transaction is stuck or slow, we can react to it or produce warnings when it takes too long. We can also gather statistics along the way to see the duration of each step. Capturing raw metrics about message flows and then correlating them into a big picture helps the user solve performance problems faster.
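
To make the idea concrete, here is a minimal sketch (Python with the pymqi client, purely for illustration) of capturing details around a single MQ put and forwarding them to a central tracking queue; the queue manager, channel, queue names and event fields are assumptions for the example, not part of any product.

    import json
    import time

    import pymqi  # IBM MQ client bindings for Python

    QMGR = 'QM1'                        # assumed queue manager name
    CHANNEL = 'DEV.APP.SVRCONN'         # assumed client channel
    CONN_INFO = 'mqhost(1414)'          # assumed host(port)
    APP_QUEUE = 'ORDERS.IN'             # the application queue being tracked
    TRACKING_QUEUE = 'TRACKING.EVENTS'  # central queue the collector reads

    qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
    app_q = pymqi.Queue(qmgr, APP_QUEUE)
    track_q = pymqi.Queue(qmgr, TRACKING_QUEUE)

    payload = b'<order id="42"/>'

    # Time the put and record what happened, like scanning a package at a depot.
    start = time.time()
    app_q.put(payload)
    elapsed_ms = (time.time() - start) * 1000.0

    event = {
        'queueManager': QMGR,
        'queue': APP_QUEUE,
        'operation': 'MQPUT',
        'bytes': len(payload),
        'elapsedMs': round(elapsed_ms, 3),
        'timestamp': time.time(),
    }
    # Send the tracking event to the central location for later correlation.
    track_q.put(json.dumps(event).encode('utf-8'))

    app_q.close()
    track_q.close()
    qmgr.disconnect()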

Whether you are a corporate manager, Line of Business owner, application support group or IT infrastructure team, you need end-to-end visibility into the transactions that are relevant to you.

Stay tuned for the next installment in this series, “3 Ways to Solve App Performance Problems with Transaction Tracking”.


To learn more, watch the TechTalk here

Transformation of Real User Monitoring Tools in the Industry

With online viewership and sales growing rapidly, enterprises want to understand how to analyze performance in ways that positively impact business metrics. Deeper insight into the user experience is needed to understand why conversions are dropping and/or bounce rates are increasing or, preferably, to understand what has been helping these metrics improve.

The digital performance management industry has evolved as application performance management companies have broadened their scope beyond synthetic testing that simulates users loading specific pages at regular intervals to include web and mobile testing, and real user monitoring (RUM).  As synthetic monitoring gained popularity, performance engineers realized the variations that exist from real end users were not being captured. This led to the introduction of RUM – the process of capturing, analyzing and reporting data from a real end user’s interaction with a website. RUM has been around for more than a decade, but the technology is still in its infancy.

What features should you look for in a RUM solution?
Knowing that you need a RUM solution is the first step. The second step is identifying what features are required to meet your business needs. With a variety of solutions available in the market, identifying the must-have and nice-to-have features is important to finding the best fit.

Real-time and actionable data
Most RUM tools display insights in a dashboard in near real time. This information can be coupled with near-real-time tracking information from business analytics tools like Google Analytics. Performance data from RUM solutions should be cross-checked against metrics such as site visits, conversions, user location and device/browser insights. Many website operators continuously monitor changes in these business metrics since they can be indicative of performance problems; this also enables them to filter out false positives or isolated performance issues.


View Source

Why DevOps Transformation Produces Happiness

When reading the many articles on DevOps transformation online, it seems the only enjoyable element is the end result: the nirvana of Continuous Delivery. However, while dismantling and reconstructing the development lifecycle may seem a daunting task, the challenges involved actually satisfy three basic tenets of happiness if we embrace disruption.

According to the award-winning documentary Happy, a worldwide study found the same common thread running through jovial people everywhere, whether in the developed world or rural isolation.

Here are the three main sources of happiness and how we can achieve them through a DevOps transformation:

1. Personal Growth
The cross-pollination of skills between developers and operations gives the opportunity to learn. When working in silos on tasks that have become repetitive and habitual, it is natural to become narrow-minded. A DevOps transformation challenges staff and promotes rapid self-improvement.

If we get a better understanding of the roles of others in the team, we improve our knowledge of how our role fits into the broader ecosystem. Personal growth also comes from working well in a team and helping others.

2. A Sense of Community
Often work relationships are confined to polite conversations if we do not have reason to regularly engage. Learning different roles within the application lifecycle increases empathy between staff members.

Getting to know colleagues on a personal level helps cultivate a relaxed working environment and increases communication. With DevOps, a successful team is measured only by release velocity, and this reflects how well the team collaborates.

3. The Opportunity to Help Others
Trust builds mutual respect that promotes the transferal of knowledge. The combination of individual skills and the building of community in DevOps means you will give training and advice. This selfless act is often very rewarding.


View Source

4 Key Benefits from Using Self-Service for IBM MQ – Part 2 of 3

[This is Part 2 of a 3-part series. Catch up with Part 1 here.]

Drivers for MQ Self-Service

In Part 1, we discussed the extensive interest in MQ self-service. This interest is due to a number of factors, including the shrinking size of middleware staffs, growing workloads and increasing application complexity.

As application complexity rises, the demand for MQ access grows accordingly. The number of application developers, IT support and operations personnel needing access to MQ is increasing, and they all come to the middleware group for help.

There are a variety of use cases that are common to most enterprises. Understanding the typical business requirement to reduce support costs, and stakeholders' needs for increased visibility, message browsing and the ability to take action, is essential to providing an effective self-service system.

Typical Requirements for MQ Self-Service  

  • Visibility Anywhere: View queue status, queue depth and channel usage via the web
  • Testing: Examine queues, channels, queue managers and subscriptions
  • Forensics: Browse and manipulate application messages (see the browse sketch after this list)
  • Action: Act on application-specific messages (move, copy, edit, route, replay, create)
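
For the forensics requirement, the sketch below (Python with the pymqi client; the connection details and queue name are assumptions) shows the kind of non-destructive browse a self-service tool performs under the covers, letting users inspect messages without consuming them.

    import pymqi
    from pymqi import CMQC

    QMGR, CHANNEL, CONN_INFO = 'QM1', 'DEV.APP.SVRCONN', 'mqhost(1414)'  # assumed
    QUEUE = 'ORDERS.IN'                                                  # assumed queue

    qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
    # Open for browsing only: messages are inspected, never removed.
    queue = pymqi.Queue(qmgr, QUEUE, CMQC.MQOO_BROWSE | CMQC.MQOO_FAIL_IF_QUIESCING)

    gmo = pymqi.GMO()
    gmo.Options = CMQC.MQGMO_BROWSE_NEXT | CMQC.MQGMO_NO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING
    md = pymqi.MD()

    try:
        while True:
            msg = queue.get(None, md, gmo)
            print(md.MsgId.hex(), msg[:80])
            # Reset the descriptor so the next browse is not filtered by this MsgId.
            md.MsgId = CMQC.MQMI_NONE
            md.CorrelId = CMQC.MQCI_NONE
    except pymqi.MQMIError as e:
        if e.reason != CMQC.MQRC_NO_MSG_AVAILABLE:
            raise
    finally:
        queue.close()
        qmgr.disconnect()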

Crafting an Effective Self-Service Solution

How do you go about crafting an effective self-service solution for IBM MQ? Many organizations use IBM's MQ Explorer. After all, why not? It comes out of the box with the product, so it is certainly an option. The product has all the capabilities you need to manage and view the MQ environment; however, it can be challenging to use for problem diagnosis, and it does not meet several of the objectives we identified.

MQ Explorer is lacking:

  • Simplicity: You need to install an Eclipse client and set the appropriate security level to grant access. This can expose the complexity of MQ, requiring tool users to have a solid understanding of MQ or they will be lost, and it makes it difficult for non-specialists to complete their tasks.
  • Scalability: Trying to roll out the MQ Explorer to hundreds or thousands of users is challenging for most organizations as it is a manual task.
  • Security & Audit: You end up giving people more capability than you want to give them. Users can potentially see and do more than what is needed, which can be dangerous.

The Better Approach to Self Service

First, start off with a self-service monitoring dashboard which provides stakeholders a business view of MQ:

  • Activity
  • Availability
  • Performance

Teams acquire an end-to-end view of application flows through all the moving parts that make up a workflow.

Next, provide users with real-time application visibility for instant awareness of performance problems; standard web-enabled dashboards do not typically supply this. Users gain the ability to understand what is happening within MQ as it relates to their situation, and problem resolution time shrinks. When a problem occurs, instead of calling the middleware team to say something like "I think MQ is broken," the user can describe the issue they are experiencing and place it in a business context for rapid remediation.

Then, provide deep-dive visibility. Many users lack insight into how MQ impacts application performance and behavior. This approach to MQ self-service is empowering for users because it enables them to better understand how the middleware behaves. Stakeholders get the opportunity to participate actively and to proactively diagnose situations where issues might occur, which in turn helps the team prevent problems from recurring. Once deep visibility is provided to stakeholders, productivity improves.

Finally, we come to taking action. When talking about self-service, we are not merely considering how users view objects.  We are also covering how users take action to improve the situation.  Make it simple for users to understand the necessary procedures that are available to them. Help them choose the right action to perform, through effective communication in a format that is brief, easy to understand and one that enables a quick user response.
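
As one hedged example of such an action, the sketch below (Python with the pymqi client; all names are assumptions) moves a single message from one queue to another under a unit of work, so the message cannot be lost if the put fails. A real self-service tool wraps this kind of operation behind a guided, audited action rather than raw code.

    import pymqi
    from pymqi import CMQC

    QMGR, CHANNEL, CONN_INFO = 'QM1', 'DEV.APP.SVRCONN', 'mqhost(1414)'  # assumed
    SOURCE, TARGET = 'ORDERS.FAILED', 'ORDERS.IN'                        # assumed queues

    qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
    src = pymqi.Queue(qmgr, SOURCE, CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING)
    dst = pymqi.Queue(qmgr, TARGET, CMQC.MQOO_OUTPUT | CMQC.MQOO_FAIL_IF_QUIESCING)

    gmo = pymqi.GMO()
    gmo.Options = CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_NO_WAIT
    pmo = pymqi.PMO()
    pmo.Options = CMQC.MQPMO_SYNCPOINT
    md = pymqi.MD()

    try:
        msg = src.get(None, md, gmo)  # destructive get, inside the unit of work
        dst.put(msg, md, pmo)         # re-put the message with its descriptor
        qmgr.commit()                 # both operations succeed, or neither does
    except pymqi.MQMIError as e:
        qmgr.backout()
        if e.reason != CMQC.MQRC_NO_MSG_AVAILABLE:
            raise
    finally:
        src.close()
        dst.close()
        qmgr.disconnect()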

To learn more, read Part 3 of this 3-part series, "4 Key Benefits from Using Self-Service for IBM MQ," and learn how users can take action when provided with a graphical, historical view of middleware performance. Find out what the most important metrics are, how to interpret them and when to invoke actions.

For more information on how you can improve productivity, increase speed of delivery to customers and reduce costs, watch the TechTalk Boost Productivity using Self-Service for IBM MQ!

4 Key Benefits from Using Self-Service for IBM MQ – Part 1 of 3

The concept of self-service has evolved over many years, and it has led to important innovations in the way we work and live across many industries. Until the early 20th century, shoppers were entirely dependent on clerks: they would go to a store, hand the clerk a list of items they needed, and the clerk would select the items for them. Shopping has since changed dramatically with innovations such as supermarkets, malls and, today, internet shopping. When the first ATMs were introduced, there was a lot of fear on the part of banking organizations that customers would miss the human interaction with their bank tellers. That fear went away when it became clear that these machines were a huge success.

The Benefits to Self-Service:

Human Empowerment: Users of self-service systems are able to do things for themselves that previously required help from a specialist.

Increased Efficiency: Organizations are able to do more with less by delegating some activities to users, enabling a more economical use of resources.

Improved Productivity: With self-service, the specialists we rely upon are free to perform other tasks that deliver greater value to the organization, and user wait time decreases.

Reduced Costs: Time-consuming tasks that were previously performed by specialists are now delegated to users, which has a great impact on reducing costs.

Essential Design Criteria for Self-Service:

Despite all of these benefits, there are essential design criteria that we cannot forget about when talking about self-service. You want to provide the benefits of ease of use to your end users, but the first and foremost criterion is protection and the well-being of the user.

Safety: All self-service systems focus primarily on protecting the underlying system from the many issues that can be created, intentionally or unintentionally, by a non-specialist. The goal is to protect the integrity of the underlying system while still delivering the self-service benefit to the user.

Security: Users can only do what they are authorized to do. Automatic teller machines (ATMs) are the best example.

Simplicity: Self-service users may have little or no training, so the system has to be intuitive and must guide users toward the right actions.

Scalability: The self-service system has to be able to handle an increasing volume of consumers. Self-service often leads to a higher level of adoption than was originally anticipated when these systems were put into use.

Self-Service System for IBM MQ

A self-service system for IBM MQ has a number of potential stakeholders. Naturally, these include the people on the middleware team, but there are other groups involved with MQ monitoring to consider as well.

Middleware team: Focused on proactive management of messaging middleware. They want to manage their environment.

Application Support: Interested in faster time to repair (MTTR). They want to identify the root cause of performance issues.

Application Developers: Interested in continuous quality improvement across new releases of their applications.

Enterprise Architects and Application Owners: Interested in improving processes and reducing costs. They want to prevent performance problems from happening, and they want to monitor their applications end to end.

Application support, DevOps and operations teams can have direct access to WMQ, test messages in development and quickly find the root cause of production problems, without needing to call the middleware team. They do not need to know the internal mechanics of MQ to do their jobs effectively, so their understanding of those mechanics is usually limited. Since most of these stakeholders are not MQ specialists, it is not surprising that they frequently contact the middleware team with vague observations or questions such as: "MQ is broken, can you fix it?", "MQ is slow," "I need a new queue so I can do some testing," or "I need to be able to run tests." They can still rely on the MQ specialists on the middleware team to address their issues when needed.

A fundamental ingredient of a successful self-service implementation is delegating a specific set of selected activities to a broader group of people. Middleware teams can leverage easy-to-implement technology to empower their colleagues in application support, DevOps and operations, and save themselves a boatload of time. Understanding the common requirements and demands of these stakeholders is the key to providing them with an effective self-service system.

To learn more, read Part 2 of this 3-part series, "4 Key Benefits from Using Self-Service for IBM MQ," and learn why there is so much interest in self-service and what the typical user requirements are for MQ self-service.

For more information on how you can improve productivity, increase speed of delivery to customers and reduce costs, watch the TechTalk Boost Productivity using Self-Service for IBM MQ!

Four Tips for Improving Application Performance Management


This week, Logicalis US released a list of four practices they suggest IT professionals adopt as part of an application performance management scheme.

  1. Set a measurable baseline: Many organizations lack an empirical benchmark for how an application should perform. As a result, too many rely on human perception or the number of support calls to deduce that an application is not performing as desired. Logicalis experts recommend setting a baseline for application performance that is based on data and analytics.
  2. Shorten the time to resolution: Quickly identifying the root cause of the problem is essential but can be challenging. Often, the various component monitoring solutions in a network are not integrated. Network monitoring should begin at the end-user perspective and work back through the infrastructure, Logicalis experts said. The goal is to anticipate potential performance problems before they occur, allowing for proactive or automated remediation of issues.
  3. Employ DevOps: A DevOps strategy and related tooling can provide an organization with applications that are optimized for digital environments, helping to avoid performance problems in the first place. In-house coding can mean fewer defects and support issues down the line.
  4. Report on performance: Service providers should combine the benchmarking information with data about the supporting technologies, including the network, servers, storage, tuning and remediation procedures, with an eye toward continually improving performance and meeting service level agreements.


Read the source article at MSPmentor

The 3 Compliance Questions to Ask


As companies move to cloud, they require more certainty around export compliance.

Of the many complexities associated with cloud computing, export compliance laws arguably are some of the thorniest. From a legal and technical perspective, the export compliance laws currently on the books—as they vary from country to country—can make even the savviest and most experienced attorneys’ and engineers’ heads spin.

All enterprises must adhere to a variety of industry- and country-specific rules related to important security, data privacy, taxation and export controls. But these rules become especially murky around cloud services. For example, if a U.S.-based company provisions a virtual machine abroad, say in China, does it need to develop region-specific export controls?

Export compliance rules raise other, broader questions. For example, how do you retain agility while complying with the necessary regulations? And how do those regulations and controls vary according to workload? Like tax regulations, rules for collecting and distributing user data vary depending on location.

Not having the proper compliance protocols in place can have serious implications. Say your client is expanding into a foreign market and, at the last minute, they request a number of changes that have not been evaluated from a compliance perspective. Either the expansion is delayed, which could be damaging from a reputation and financial perspective, or the company runs the risk of being cited for compliance violations.

So, as more and more companies expand globally, how can they prepare to meet the compliance challenges stemming from cloud computing?

Read the source article at devops.com
Original Author: Contributor

To Move Fast on Cloud Computing, Go Slow


The mad rush to adopt cloud technology is no secret. In fact, the tsunami of organizations that are racing to implement these solutions clearly reflects the need to have the best new “it” in IT. As TJ McCue wrote, “Instead of a slow-moving, fluffy white cloud image, the cloud computing industry should use a tornado – that might be a better way to visualize how fast cloud computing is growing today.” In fact, the global market for cloud-computing equipment is predicted to reach almost $80 billion by 2018.

The exceptional benefits and transformative power of the cloud are clear – efficiency, productivity, scalability, storage capacity, and better use of analytics, just to name a few. But, as with all revolutionary solutions, the urgency among market leaders to introduce greater agility into their organizations, to “be fast and be first,” can lead to real buyer’s remorse over designing and deploying precisely the wrong set of cloud solutions for their organizations.


Read the source article at Data Informed
Original Author: Jim Cole