Transformation of Real User Monitoring Tools in the Industry

With online viewership and sales growing rapidly, enterprises want to understand how analyzing performance can positively impact business metrics. Deeper insight into the user experience is needed to understand why conversions are dropping or bounce rates are increasing, or, preferably, what has been helping these metrics improve.

The digital performance management industry has evolved as application performance management companies have broadened their scope beyond synthetic testing, which simulates users loading specific pages at regular intervals, to include web and mobile testing and real user monitoring (RUM). As synthetic monitoring gained popularity, performance engineers realized that the variations introduced by real end users were not being captured. This led to the introduction of RUM: the process of capturing, analyzing and reporting data from a real end user's interaction with a website. RUM has been around for more than a decade, but the technology is still in its infancy.

What features should you look for in a RUM solution?
Knowing that you need a RUM solution is the first step. The second step is identifying which features are required to meet your business needs. With a variety of solutions available in the market, separating the must-have features from the nice-to-have ones is important to find the best fit.

Real-time and actionable data
Most RUM tools display insights in a dashboard in near real time. This information can be coupled with near-real-time tracking data from business analytics tools like Google Analytics. Performance data from RUM solutions should be cross-checked against metrics such as site visits, conversions, user location and device/browser insights. Many website operators continuously monitor changes in these business metrics, since changes can indicate performance problems; doing so also helps them rule out false positives and isolated performance issues.
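To make the capture step concrete, the sketch below shows how a RUM agent might gather page-load timing and device details in the browser using the standard Navigation Timing API. The "/rum" collector endpoint is a hypothetical placeholder; commercial RUM tools inject comparable logic via a small JavaScript tag.

```typescript
// Minimal browser-side RUM capture sketch. The "/rum" endpoint is hypothetical.
function reportPageTiming(): void {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  const sample = {
    page: location.pathname,
    ttfbMs: nav.responseStart - nav.requestStart, // time to first byte for this real user
    loadMs: nav.loadEventEnd - nav.startTime,     // full page-load time
    userAgent: navigator.userAgent,               // device/browser insight
    timestamp: Date.now(),
  };

  // sendBeacon survives navigation away from the page, so samples are not lost.
  navigator.sendBeacon("/rum", JSON.stringify(sample));
}

// Defer slightly after "load" so loadEventEnd is populated.
window.addEventListener("load", () => setTimeout(reportPageTiming, 0));
```

Each beacon can then be joined server-side with analytics dimensions such as visits, conversions and location for exactly the kind of cross-checking described above.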

 


Why DevOps Transformation Produces Happiness

When reading the many articles on DevOps transformation online, it seems the only enjoyable element is the end result: the nirvana of Continuous Delivery. However, while dismantling and reconstructing the development lifecycle may seem a daunting task, the challenges involved actually satisfy three basic tenets of happiness, if we embrace disruption.

According to the award-winning documentary Happy, a worldwide study found the same common thread running through jovial people everywhere, whether in the developed world or rural isolation.

Here are the three main sources of happiness and how we can achieve them through a DevOps transformation:

1. Personal Growth
The cross-pollination of skills between developers and operations gives the opportunity to learn. When working in silos on tasks that have become repetitive and habitual, it is natural to become narrow-minded. A DevOps transformation challenges staff and promotes rapid self-improvement.

If we get a better understanding of the roles of others in the team, we improve our knowledge of how our role fits into the broader ecosystem. Personal growth also comes from working well in a team and helping others.

2. A Sense of Community
Often work relationships are confined to polite conversations if we do not have reason to regularly engage. Learning different roles within the application lifecycle increases empathy between staff members.

Getting to know colleagues on a personal level helps cultivate a relaxed working environment and increases communication. With DevOps, a successful team is measured only by release velocity, and this reflects how well the team collaborates.

3. The Opportunity to Help Others
Trust builds mutual respect, which promotes the transfer of knowledge. The combination of individual skills and the building of community in DevOps means you will give training and advice, and this selfless act is often very rewarding.

 

 


The Best Ways to Solve DataPower Issues that Impact Applications – Part 4 of 4

[This is Part 4 in a 4-part series. Catch up with Part 1 here.]

Converting DataPower metrics and events into actionable intelligence  

DataPower appliances have several management APIs and interfaces that provide detailed information about system operations and performance. By using these interfaces, we can capture a very broad range of configuration and status data.
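As a rough illustration of pulling status data programmatically, here is a sketch against DataPower's REST management interface, assuming it is enabled on its default port (5554) and that status providers such as CPUUsage and MemoryStatus are exposed. The host, credentials and "default" domain are placeholders, and Node 18+ is assumed for the global fetch.

```typescript
// Sketch: poll DataPower status providers over the REST management interface.
// Host, credentials and the "default" domain are placeholders for illustration.
const host = "dp.example.com";
const auth = Buffer.from("admin:password").toString("base64");

async function getStatus(statusClass: string): Promise<unknown> {
  const res = await fetch(`https://${host}:5554/mgmt/status/default/${statusClass}`, {
    headers: { Authorization: `Basic ${auth}` },
  });
  if (!res.ok) throw new Error(`DataPower returned ${res.status} for ${statusClass}`);
  return res.json();
}

// Collect a couple of status classes and hand them to whatever pipeline turns
// raw metrics into actionable intelligence (alerting, dashboards, analytics).
async function pollOnce(): Promise<void> {
  const [cpu, memory] = await Promise.all([
    getStatus("CPUUsage"),
    getStatus("MemoryStatus"),
  ]);
  console.log(JSON.stringify({ collectedAt: new Date().toISOString(), cpu, memory }));
}

pollOnce().catch(console.error);
```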

Four Tips for Improving Application Performance Management

This week, Logicalis US released a list of four practices they suggest IT professionals adopt as part of an application performance management scheme.

  1. Set a measurable baseline: Many organizations lack an empirical benchmark for how an application should perform. As a result, too many rely on human perception or the number of support calls to deduce that an application is not performing as desired. Logicalis experts recommend setting a baseline for application performance that is based on data and analytics (a minimal sketch of such a baseline appears after this list).
  2. Shorten the time to resolution: Quickly identifying the root cause of the problem is essential but can be challenging. Often, the various component monitoring solutions in a network are not integrated. Network monitoring should begin at the end-user perspective and work back through the infrastructure, Logicalis experts said. The goal is to anticipate potential performance problems before they occur, allowing for proactive or automated remediation of issues.
  3. Employ DevOps: A DevOps strategy and related tooling can provide an organization with applications that are optimized for digital environments, helping to avoid performance problems in the first place. In-house coding can mean fewer defects and support issues down the line.
  4. Report on performance: Service providers should combine the benchmarking information with data about the supporting technologies, including the network, servers, storage, tuning and remediation procedures, with an eye toward continually improving performance and meeting service level agreements.
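As a minimal sketch of tip 1, the snippet below derives a p95 response-time baseline from historical samples and flags measurements that exceed it by a tolerance. The sample values, the 95th percentile and the 20% tolerance are illustrative choices, not Logicalis recommendations.

```typescript
// Illustrative data-driven baseline: p95 of historical response times (ms),
// with a 20% tolerance before a new measurement is treated as a regression.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

function isRegression(baseline: number[], currentMs: number, tolerance = 1.2): boolean {
  // Compare against data, not human perception or support-call volume.
  return currentMs > percentile(baseline, 95) * tolerance;
}

// Usage: last week's checkout response times vs. today's measurement.
const history = [180, 210, 195, 240, 205, 260, 220];
console.log(percentile(history, 95));    // 260 ms baseline
console.log(isRegression(history, 400)); // true -> investigate
```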


Read the source article at MSPmentor

APM’s Service-Oriented Recipe for Success

Nastel Comments: Many companies need the capability to capture and monitor end-user experience for web-based applications as part of their overall delivery of business transaction monitoring, in order to perform the following:

  • Capturing real-time, operational performance metrics
  • Measuring technology components that impact the end user

APM's Service-Oriented Recipe for Success

Many of us are grappling with the modern demands of digital business: developing new mobile apps, evaluating security in the face of IoT, moving to hybrid clouds, testing approaches to defining networks through software. It’s all part of the hard trend toward service-oriented IT, with a primary goal of delivering a premium user experience to all your users—internal, partner or customer—with the speed, quality and agility the business demands.

How do you meet these elevated expectations? As modern data centers evolve rapidly to tackle these agility demands, network and application architectures are becoming increasingly complex, complicating efforts to understand service quality from infrastructure and application monitoring alone. Virtualization can obscure critical performance visibility at the same time that complex service dependencies challenge even the best performance analysts and the most effective war rooms. Although this situation may read like a recipe for disaster, within it are the secrets to success.

Service Quality Is in the Eye of the End User

Remember the adage “beauty is in the eye of the beholder”? The same idea applies here; service quality is in the eye of the user. It’s hard to argue with that sentiment, especially when we consider the user as the face of the business. So, of course, to understand service quality we should be measuring end-user experience (EUE), where EUE is defined as the end-user response time or “click to glass.” In fact, EUE visibility has become a critical success factor for IT service excellence, providing important context to more effectively interpret infrastructure performance metrics.
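One way to approximate "click to glass" in the browser is the Event Timing API, whose entries report the time from a user's input to the next paint after the event handlers have run. The sketch below is a deliberately simplified illustration (it will not capture long asynchronous fetches triggered by the click), and the "/eue" endpoint is a hypothetical collector.

```typescript
// Simplified "click to glass" sketch using the Event Timing API.
// entry.duration spans input delay, event processing and time to next paint.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    if (entry.name !== "click") continue;
    navigator.sendBeacon(
      "/eue", // hypothetical EUE collector endpoint
      JSON.stringify({ page: location.pathname, clickToGlassMs: entry.duration })
    );
  }
});

// Only report interactions slower than ~100 ms; buffered picks up earlier events.
observer.observe({ type: "event", buffered: true, durationThreshold: 100 });
```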

Agent-based APM solutions are one way to capture EUE, but they come with important caveats:

  • These agent-based solutions may be unavailable or unsuitable for operations teams
  • Not all Java and .NET apps will be instrumented
  • Some agent-based solutions do not measure EUE
  • Some agent-based solutions only sample transaction performance (let's call this some user experience, or SUE)
  • Many application architectures don't lend themselves to agent-based EUE monitoring

An Important Lesson

For these and other reasons, IT operations teams have often focused on more approachable infrastructure monitoring—device, network, server, application and storage—with the implication that the whole is equal to the sum of its parts. The theory was (or still is) that by evaluating performance metrics from all of these components, one could assemble a reasonable understanding of service quality. The more ambitious IT teams combine metrics from many disparate monitoring solutions into a single console, perhaps with time-based correlation if not a programmed analysis of cause and effect. We might call such a system a manager of managers (MOM), or business service management (BSM). Some still serve us well, likely aided by a continual regimen of care and feeding; still more have faded from existence. But we have learned an important lesson along the way—namely, EUE measurements are critical for IT efficiency for many reasons, such as

  • Knowing when there is a problem that affects users
  • Prioritizing responses to problems on the basis of business impact
  • Avoiding chasing problems that don’t exist, or deprioritizing those that don’t affect users
  • Troubleshooting with a problem definition that matches performance metrics
  • Knowing when (or if) you’ve actually resolved a problem

Complexity Drives APM Evolution

Performance-monitoring capabilities continue to mature, evolving from real-time monitoring and historical reporting to more sophisticated fault-domain isolation and root-cause analysis, applying trending or more-sophisticated analytics to predict, prevent or even take action to correct problems.

One of the compelling drivers is the increasing complexity—of data center networks, application-delivery chains and application architectures. And with this complexity comes an increasing volume of monitoring data stressing, even threatening, current approaches to operational performance monitoring. It’s basically a big-data problem. And in response, IT operations analytics (ITOA) solutions are coming to market as an approach to derive insights into IT system behaviors—including but not limited to performance—by analyzing generally large volumes of data from multiple sources. The ITOA market insights from Gartner tell an interesting story: spending doubled from 2013 to 2014 to reach $1.6 billion, while estimates suggest that only about 10% of enterprises currently use ITOA solutions. That’s a lot of room for growth!

Read the source article at datacenterjournal.com

Five Security Features That Your Next-Gen Cloud Must Have

Nastel Comments: Cloud computing demands a high degree of automation from an application performance management (APM) / business transaction management (BTM) solution in order to deliver the visibility that users require.

An APM / BTM solution must adjust what and where it is monitoring in order to keep pace with the elastic configuration of this ever-changing environment and deliver the promised return on investment that cloud users expect. Manual efforts to specify where the applications are, the dependencies between transactions and the status of services will not be effective.
Five Security Features That Your Next-Gen Cloud Must Have

With cloud computing, virtualization, and a new type of end user, the security landscape around modern infrastructure has had to evolve. IT consumerization and far more data within the organization have forced security professionals to adopt better ways to protect their environments. The reality is that standard firewalls and UTMs are just no longer enough. New technologies have emerged that can greatly enhance the security of a cloud and virtualization environment, without impacting performance. This is where the concept of next-generation security came from.

It grew out of the need to abstract physical security services and create logical components for a powerful infrastructure offering.

With that in mind, let's look at five great next-gen security features that you should consider.

  1. Virtual security services. What if you need application-level security? What about controlling and protecting inbound, outbound, and intra-VM traffic? New virtual services can give you entire virtual firewalls, optimized anti-virus/anti-malware tools, and even proactive intrusion detection services. Effectively, these services allow for the multi-tenant protection and support of network virtualization and cloud environments.
  2. Going agentless. Clientless security now directly integrates with the underlying hypervisor. This gives your virtual platform the capability to do fast, incremental scans as well as the power to orchestrate scans and set thresholds across VMs. Here's the reality: you can do all of this without performance degradation. Now, we're looking at direct virtual infrastructure optimization while still maintaining optimal cloud resource efficiency. For example, if you're running on a VMware ecosystem, there are some powerful "agentless" technologies you can leverage. Trend Micro's Deep Security agentless anti-malware scanning, intrusion prevention and file integrity monitoring capabilities help VMware environments benefit from better resource utilization when it comes to securing VMs. Further, Deep Security has been optimized to support the protection of multitenant environments and cloud-based workloads, such as Amazon Web Services and Microsoft Azure.
  3. Integrating network traffic with security components. Not only can you isolate VMs, create multi-tenant protection across your virtual and cloud infrastructure, and allow for application-specific protection – you can now control intra-VM traffic at the networking layer. This type of integration allows the security layer to be "always-on." That means security continues to be active even during activities like a live VM migration.
  4. Centralized cloud and virtual infrastructure management/visibility. Whether you have a distributed cloud or virtualization environment, management and direct visibility are critical to the health of your security platform. One of the best things about next-generation security is the unified visibility the management layer is capable of creating. Look for the ability to aggregate, analyze and audit your logs and your entire security infrastructure. Powerful spanning policies allow your virtual infrastructure to be much more proactive when it comes to security. By integrating virtual services (mentioned above) into the management layer, administrators are able to be proactive, stay compliant, and continuously monitor the security of their infrastructure.
  5. Consider next-gen end-point security for your cloud users. There are some truly disruptive technologies out there today. Here’s an example: Cylance. This security firm replaces more traditional, signature-based, technologies with a truly disruptive architecture. Basically, Cylance uses a machine-learning algorithm to inspect millions of file attributes to determine the probability that a particular file is malicious. The algorithmic approach significantly reduces the endpoint and network resource requirement. Because of its signature-less approach, it is capable of detecting both new threats and new variants of known threats that typically are missed by signature-based techniques. Here’s the other really cool part – even when your users disconnect from the cloud, they’re still very well protected. Because the Cylance endpoint agent does not require a database of signatures or daily updates, and is extremely lightweight on network, compute, and data center resources – it can remain effective even when disconnected for long periods.

Read the source article at Web Host Industry Review
Original Author: thewhir

Application Performance Management software in hot demand

Nastel Comments: Nastel's APM solutions can provide the visibility, prediction and performance you need to proactively maintain the performance of complex applications from the data center to the cloud. Enterprises are provided with deep-dive visibility into the root cause of problems, along with real-time analytics that reduce false positives and deliver warnings about problems before users are impacted.

Application Performance Management software in hot demand

A rising demand for cloud-based Application Performance Management (APM) software is lifting the distributed performance and availability management software market, according to Technavio. In fact, the analysts have forecast the global distributed performance and availability management software market to grow at a CAGR of more than 13% during the forecast period. Technavio ICT analysts highlight the following four factors that are contributing to the growth of the global distributed performance and availability management software market:

  • Rising demand for cloud-based APM software
  • Increased need to enhance business productivity
  • Greater need for visibility into business processes
  • Reduced operational costs of distributed performance and availability management software

“As enterprises move enterprise applications to the cloud, the need for managing and monitoring the performance of applications across a distributed computing environment becomes important. As a result, the demand for cloud-based APM software is increasing,” says Amrita Choudhury, a lead analyst at Technavio for enterprise application.

Read the source article at ChannelLife NZ

Real-Time Analytics for Operational Insight

Nastel Comments: Real-time analytics, monitoring and tracking are essential to gather the right data and analyze it appropriately in order to continuously innovate, improve and keep your customers happy.

Real-Time Analytics for Operational Insight

Watch this TechTalk and discover:

• Roughly 25% of DevOps professionals surveyed recently by IDC report a need for analytics.

• What are the benefits of real-time data? How can users analyze perishable data while it still matters?

• What measurable improvements can be made through real-time streaming analytics?

Read the source article at library.nastel.com

Website Performance Testing Tips and Tools

Nastel Comments: Companies must do website performance testing and pay attention to its impact on the end user. APM solutions can identify where application issues are impeding follow-through on transactions. Identifying before the holiday rush that a certain user action isn't executing significantly reduces the chance that users will abandon their shopping baskets and take their business elsewhere when that same application fails on high-volume days.

Website Performance Testing Tips and Tools

There’s nothing more frustrating than waiting for a website to load. In the new era of mobile internet usage, this complaint is shared by many. Studies regularly show that many people will abandon a site if it’s slow to load – the only thing that seems to change is that the time threshold keeps lowering.

That may be shocking to some, but consumers no longer have the patience they once had. Aside from impacting user engagement, a sluggish site can quickly see you drop down the search rankings. Slow loading is a common trait of sites that offer a poor user experience, and search engine bots now consider this when ranking a site.

The truth is, with today's advances in technology, it's unacceptable to offer a poorly performing website. Whether you're a small business or a vast ecommerce empire, it's essential to carry out performance tests periodically. We've selected 5 of the best tools to help you out.
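Alongside those tools, a crude self-hosted check is easy to sketch: fetch a page on a schedule and record how long the full response takes. Node 18+ is assumed for the global fetch, and the URL and interval are placeholders; dedicated testing tools go much further, breaking the time down by DNS, connection, rendering and individual assets.

```typescript
// Crude periodic page-timing probe (Node 18+ assumed for global fetch).
async function timePageLoad(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  await res.arrayBuffer(); // drain the body so the full download is timed
  return performance.now() - start;
}

// Check once a minute and log the result; feed this into alerting as needed.
setInterval(async () => {
  try {
    const ms = await timePageLoad("https://www.example.com/");
    console.log(`${new Date().toISOString()} fetched in ${ms.toFixed(0)} ms`);
  } catch (err) {
    console.error("probe failed:", err);
  }
}, 60_000);
```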

Read the source article at Business 2 Community

Monitoring as a Discipline and the Systems Administrator

Ensure your organization leverages a monitoring tool that provides full-stack visibility. It's no secret that IT has traditionally functioned in silos. IT professionals have disparately managed servers, storage and other infrastructure elements for decades. But today's businesses run on software and applications, which draw on resources from the entire system: storage, server compute, databases, etc., all of which are increasingly interdependent. IT professionals need visibility into the entire application stack in order to identify the root cause of issues quickly and to proactively spot problems that could impact the end-user experience and the business bottom line if not corrected quickly.

Read the source article at Data Center Knowledge