Managing Application Performance in Hybrid Clouds

Keeping pace with changes to your networks, systems and applications can feel like a full-time job. As applications support more data types in more distributed environments—often in real time—it becomes imperative to have a clear picture of your end-to-end computing environment.

Just as important is making sure your apps take advantage of the adaptability and efficiency of microservices, document data models and other cloud-focused technologies.

All those much-touted benefits of cloud computing—efficiency, agility, scalability—come to nothing if your applications don’t perform as expected. The growing prevalence of hybrid cloud infrastructures makes managing app performance trickier than ever.

Read the source article at devops.com
Original Author: Brian Wheeler

Nastel Announces AutoPilot Insight Real-User Monitoring

Nastel Technologies announced the addition of AutoPilot Insight Real-User Monitoring and analytics to its flagship AutoPilot Insight software platform.

According to Charley Rich, Nastel’s VP-Product Management, “Slow Web apps are a terrific way to kill revenues, harm reputations and drive users to competitors. The problem is, even as traditional datacenter performance metrics say everything is fine, users are tapping their fingers with impatience because of sub-standard app responsiveness.

“AutoPilot’s new capabilities handle exactly this kind of situation, and can automatically pinpoint the source of problems that hurt a company’s reputation with its client base,” he said. “Basically, we capture and analyze two very different sets of data: the subjective user experience of fast or sluggish app responsiveness, and back-end server activities. Our secret sauce is being able to stitch together both data sets, analyze it, and deliver actionable insights to correct performance issues whenever and wherever they occur.

“The key to making real-user monitoring easy to deploy,” Rich continued, “is the use of browser-injection technology. So in addition to the detailed web and server metrics one would expect, our software enables clients to track end-user activities across geo-locations, and it automatically understands and visually depicts the relationship between application topologies and end-user requests.”
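The "click to glass" numbers an injected browser snippet reports are typically Navigation Timing-style timestamps. The sketch below (illustrative only, not Nastel's implementation; field names follow the W3C Navigation Timing attributes, but the sample values are invented) shows how such a record breaks a page load into front-end and back-end phases:

```python
# Illustrative sketch: split a Navigation Timing-style record (epoch-relative
# milliseconds) into the phases a real-user monitoring collector would report.

def page_load_breakdown(t):
    """Compute per-phase durations from a navigation timing record."""
    return {
        "dns_ms": t["domainLookupEnd"] - t["domainLookupStart"],
        "connect_ms": t["connectEnd"] - t["connectStart"],
        "server_ms": t["responseStart"] - t["requestStart"],   # back-end time
        "transfer_ms": t["responseEnd"] - t["responseStart"],
        "dom_ms": t["domComplete"] - t["responseEnd"],         # client-side render
        "total_ms": t["loadEventEnd"] - t["navigationStart"],  # "click to glass"
    }

# Invented sample values for one page load.
sample = {
    "navigationStart": 0, "domainLookupStart": 5, "domainLookupEnd": 30,
    "connectStart": 30, "connectEnd": 70, "requestStart": 70,
    "responseStart": 320, "responseEnd": 380, "domComplete": 900,
    "loadEventEnd": 950,
}
print(page_load_breakdown(sample))
```

Comparing `server_ms` against the client-side phases is what lets a tool attribute slowness to the back end or to the browser.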

The ability to synthesize insights derived from topology mapping, server behaviors, and user requests—along with presenting probable root causes of problems in an intuitive visual manner—translates to reduced mean-time-to-repair (MTTR) of software issues and lower overall cost of support.

Whether a problem’s root cause is a JavaScript error on the client, network latency, or a slow Java method, AutoPilot Insight’s interface takes specialists to underlying problem issues with the press of a button. Detailed drill-down capabilities are provided in addition to single-click root-cause analysis.

AutoPilot Insight also stands apart from other solutions by offering a natural language query capability that lets IT specialists “talk” to their data, detecting the subtle, hidden patterns behind the toughest, most intractable performance problems.

Available key metrics include a full breakdown of each page request into its components, browser-specific issues, geo-locations, top requests, worst response times, slowest loading pages, slowest server connections and much more.

“AutoPilot Insight,” Rich concludes, “is a unified solution that analyzes user requests, logs, metrics and transactions spanning the browser, web apps, middleware, brokers and mainframes. With this end-to-end measurement of performance you will rest easy that your users are satisfied and your company’s reputation is secure.”

Read the source article at APMdigest
Original Author: Pete Goldin

APM’s Service-Oriented Recipe for Success

Nastel Comments: Many companies need the ability to capture and monitor end-user experience for web-based applications as part of their overall business transaction monitoring, in order to perform the following:

  • Capturing real-time, operational performance metrics
  • Measuring the technology components that impact the end user

APM's Service-Oriented Recipe for Success

Many of us are grappling with the modern demands of digital business: developing new mobile apps, evaluating security in the face of IoT, moving to hybrid clouds, testing approaches to defining networks through software. It’s all part of the hard trend toward service-oriented IT, with a primary goal of delivering a premium user experience to all your users—internal, partner or customer—with the speed, quality and agility the business demands.

How do you meet these elevated expectations? As modern data centers evolve rapidly to tackle these agility demands, network and application architectures are becoming increasingly complex, complicating efforts to understand service quality from infrastructure and application monitoring alone. Virtualization can obscure critical performance visibility at the same time complex service dependencies challenge even the best performance analysts and the most effective war rooms. Although this situation may read like a recipe for disaster, within are secrets to success.

Service Quality Is in the Eye of the End User

Remember the adage “beauty is in the eye of the beholder”? The same idea applies here; service quality is in the eye of the user. It’s hard to argue with that sentiment, especially when we consider the user as the face of the business. So, of course, to understand service quality we should be measuring end-user experience (EUE), where EUE is defined as the end-user response time or “click to glass.” In fact, EUE visibility has become a critical success factor for IT service excellence, providing important context to more effectively interpret infrastructure performance metrics. Agent-based APM instrumentation is one common way to capture EUE, but it comes with caveats:

  • These agent-based solutions may be unavailable or unsuitable for operations teams
  • Not all Java and .NET apps will be instrumented
  • Some agent-based solutions do not measure EUE
  • Some agent-based solutions only sample transaction performance (let’s call this some user experience, or SUE)
  • Many application architectures don’t lend themselves to agent-based EUE monitoring

An Important Lesson

For these and other reasons, IT operations teams have often focused on more approachable infrastructure monitoring—device, network, server, application and storage—with the implication that the whole is equal to the sum of its parts. The theory was (or still is) that by evaluating performance metrics from all of these components, one could assemble a reasonable understanding of service quality. The more ambitious IT teams combine metrics from many disparate monitoring solutions into a single console, perhaps with time-based correlation if not a programmed analysis of cause and effect. We might call such a system a manager of managers (MOM), or business service management (BSM). Some still serve us well, likely aided by a continual regimen of care and feeding; still more have faded from existence. But we have learned an important lesson along the way—namely, EUE measurements are critical for IT efficiency for many reasons, such as

  • Knowing when there is a problem that affects users
  • Prioritizing responses to problems on the basis of business impact
  • Avoiding chasing problems that don’t exist, or deprioritizing those that don’t affect users
  • Troubleshooting with a problem definition that matches performance metrics
  • Knowing when (or if) you’ve actually resolved a problem
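The second point above—prioritizing responses by business impact—can be reduced to a simple ranking rule. This is a hypothetical sketch with invented field names, not any particular product's logic:

```python
# Hypothetical sketch: rank open performance problems so that user-facing,
# high-impact issues come first, as the list above recommends.

def prioritize(alerts):
    """Order alerts: user-affecting first, then by number of users affected."""
    return sorted(
        alerts,
        key=lambda a: (not a["affects_users"], -a["users_affected"]),
    )

alerts = [
    {"id": "disk-trend", "affects_users": False, "users_affected": 0},
    {"id": "checkout-slow", "affects_users": True, "users_affected": 1200},
    {"id": "login-errors", "affects_users": True, "users_affected": 300},
]
print([a["id"] for a in prioritize(alerts)])
# → ['checkout-slow', 'login-errors', 'disk-trend']
```

A back-end capacity trend with no current user impact drops to the bottom of the queue—exactly the "avoid chasing problems that don't exist" behavior EUE data enables.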

Complexity Drives APM Evolution

Performance-monitoring capabilities continue to mature, evolving from real-time monitoring and historical reporting to more sophisticated fault-domain isolation and root-cause analysis, applying trending or more-sophisticated analytics to predict, prevent or even take action to correct problems.

One of the compelling drivers is the increasing complexity—of data center networks, application-delivery chains and application architectures. And with this complexity comes an increasing volume of monitoring data stressing, even threatening, current approaches to operational performance monitoring. It’s basically a big-data problem. And in response, IT operations analytics (ITOA) solutions are coming to market as an approach to derive insights into IT system behaviors—including but not limited to performance—by analyzing generally large volumes of data from multiple sources. The ITOA market insights from Gartner tell an interesting story: spending doubled from 2013 to 2014 to reach $1.6 billion, while estimates suggest that only about 10% of enterprises currently use ITOA solutions. That’s a lot of room for growth!

Read the source article at datacenterjournal.com

Five Security Features That Your Next-Gen Cloud Must Have

Nastel Comments: Cloud computing demands a high degree of automation from an application performance management (APM) / business transaction management (BTM) solution in order to deliver the visibility that users require.

An APM / BTM solution must adjust what and where it is monitoring in order to keep pace with the elastic configuration of this ever-changing environment and deliver the promised return on investment that cloud users expect. Manual efforts to specify where the applications are, the dependencies between transactions and the status of services will not be effective.

Five Security Features That Your Next-Gen Cloud Must Have

With cloud computing, virtualization and a new type of end user, the security landscape around the modern infrastructure needed to evolve. IT consumerization and far more data within the organization have forced security professionals to adopt better ways to protect their environments. The reality is that standard firewalls and UTMs are simply no longer enough. New technologies have emerged which can greatly enhance the security of a cloud and virtualization environment – without impacting performance. This is where the concept of next-generation security came from.

It was the need to abstract physical security services and create logical components for a powerful infrastructure offering.

With that in mind – let’s look at five great next-gen security features that you should consider.

  1. Virtual security services. What if you need application-level security? What about controlling and protecting inbound, outbound, and intra-VM traffic? New virtual services can give you entire virtual firewalls, optimized anti-virus/anti-malware tools, and even proactive intrusion detection services. Effectively, these services allow for the multi-tenant protection and support of network virtualization and cloud environments.
  2. Going agentless. Clientless security now directly integrates with the underlying hypervisor. This gives your virtual platform the capability to do fast, incremental scans as well as the power to orchestrate scans and set thresholds across VM’s. Here’s the reality – you can do all of this without performance degradation. Now, we’re looking at direct virtual infrastructure optimization while still maintaining optimal cloud resource efficiency. For example, if you’re running on a VMware ecosystem, there are some powerful “agentless” technologies you can leverage. Trend Micro’s Deep Security agentless anti-malware scanning, intrusion prevention and file integrity monitoring capabilities help VMware environments benefit from better resources utilization when it comes to securing VMs. Further, Deep Security has been optimized to support the protection of multitenant environments and cloud-based workloads, such as Amazon Web Services and Microsoft Azure.
  3. Integrating network traffic with security components. Not only can you isolate VMs, create multi-tenant protection across your virtual and cloud infrastructure, and allow for application-specific protection – you can now control intra-VM traffic at the networking layer. This type of integration allows the security layer to be “always-on.” That means security continues to be active even during activities like a live VM migration.
  4. Centralized cloud and virtual infrastructure management/visibility. Whether you have a distributed cloud or virtualization environment – management and direct visibility are critical to the health of your security platform. One of the best things about next-generation security is the unified visibility its management layer creates. Look for the ability to aggregate, analyze and audit your logs and your entire security infrastructure. Powerful spanning policies allow your virtual infrastructure to be much more proactive when it comes to security. By integrating virtual services (mentioned above) into the management layer – administrators are able to be proactive, stay compliant, and continuously monitor the security of their infrastructure.
  5. Consider next-gen end-point security for your cloud users. There are some truly disruptive technologies out there today. Here’s an example: Cylance. This security firm replaces more traditional, signature-based, technologies with a truly disruptive architecture. Basically, Cylance uses a machine-learning algorithm to inspect millions of file attributes to determine the probability that a particular file is malicious. The algorithmic approach significantly reduces the endpoint and network resource requirement. Because of its signature-less approach, it is capable of detecting both new threats and new variants of known threats that typically are missed by signature-based techniques. Here’s the other really cool part – even when your users disconnect from the cloud, they’re still very well protected. Because the Cylance endpoint agent does not require a database of signatures or daily updates, and is extremely lightweight on network, compute, and data center resources – it can remain effective even when disconnected for long periods.
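The idea in point 5—scoring file attributes with a learned model instead of matching signatures—can be illustrated with a toy logistic scorer. The features and weights below are invented for illustration; this is not Cylance's actual algorithm:

```python
import math

# Toy illustration of attribute-based malware scoring: a logistic model turns
# a handful of file attributes into a probability of maliciousness.
# Feature names and weights are invented, not any vendor's real model.

WEIGHTS = {"packed": 2.0, "unsigned": 1.5, "writes_registry": 1.0}
BIAS = -2.5

def malice_probability(attrs):
    """Sum the weights of the attributes present, then squash to [0, 1]."""
    z = BIAS + sum(w for name, w in WEIGHTS.items() if attrs.get(name))
    return 1 / (1 + math.exp(-z))

benign = {"packed": False, "unsigned": False, "writes_registry": False}
suspect = {"packed": True, "unsigned": True, "writes_registry": True}
print(round(malice_probability(benign), 3), round(malice_probability(suspect), 3))
```

Because the score depends only on locally observable attributes, no signature database or daily update is needed—which is why this style of agent keeps working offline.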

Read the source article at Web Host Industry Review
Original Author: thewhir

Application Performance Management software in hot demand

Nastel Comments: Looking into Nastel’s APM solutions can be beneficial for providing the visibility, prediction and performance you need to proactively maintain the performance of complex applications from the datacenter to the cloud. Enterprises get deep-dive visibility into the root cause of problems, along with real-time analytics that reduce false positives and deliver warnings about problems before users are impacted.

Application Performance Management software in hot demand

A rising demand for cloud-based Application Performance Management (APM) software is lifting the distributed performance and availability management software market, according to Technavio. In fact, the analysts forecast the global distributed performance and availability management software market to grow at a CAGR of more than 13% during the forecast period. Technavio ICT analysts highlight the following four factors contributing to the growth of this market:

  • Rising demand for cloud-based APM software
  • Increased need to enhance business productivity
  • Greater need for visibility into business processes
  • Reduced operational costs of distributed performance and availability management software

“As enterprises move enterprise applications to the cloud, the need for managing and monitoring the performance of applications across a distributed computing environment becomes important. As a result, the demand for cloud-based APM software is increasing,” says Amrita Choudhury, a lead analyst at Technavio for enterprise application.
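To see what a 13% CAGR implies, compound a base market size over the forecast period. The $1.0B starting value below is a hypothetical example, not Technavio's figure:

```python
# Worked example of compound annual growth: project a hypothetical $1.0B
# market forward at the 13% CAGR cited above.

def project(base, cagr, years):
    """Return the market size (in $B) for each year from 0 through `years`."""
    return [round(base * (1 + cagr) ** y, 2) for y in range(years + 1)]

print(project(1.0, 0.13, 4))
# → [1.0, 1.13, 1.28, 1.44, 1.63]
```

At 13% a year, the market grows by roughly two-thirds over a four-year forecast window.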

Read the source article at ChannelLife NZ

Real-Time Analytics for Operational Insight

Nastel Comments: Real-Time Analytics, Monitoring and Tracking is essential to gather the right data and analyze it appropriately in order to continuously innovate, improve and keep your customers happy.

Real-Time Analytics for Operational Insight

Watch this TechTalk and discover:

• Roughly 25% of DevOps professionals surveyed recently by IDC report a need for analytics.

• What are the benefits of real-time data? How can users analyze perishable data while it still matters?

• What measurable improvements can be made through real-time streaming analytics?
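Analyzing perishable data "while it still matters" usually means computing over a sliding window as events arrive, rather than in a later batch job. A minimal sketch of the idea, with invented metric values:

```python
from collections import deque

# Minimal sketch of real-time streaming analytics: a fixed-size sliding
# window over response times, so a latency spike is visible in the trailing
# average the moment it arrives instead of in tomorrow's batch report.

class SlidingWindow:
    def __init__(self, size):
        self.window = deque(maxlen=size)  # oldest samples fall off automatically

    def observe(self, value):
        """Add one sample and return the current trailing average."""
        self.window.append(value)
        return sum(self.window) / len(self.window)

w = SlidingWindow(size=3)
averages = [w.observe(ms) for ms in [100, 110, 120, 600, 610]]
print(averages)  # the 600 ms spike shows up immediately in the trailing average
```

The same windowed structure underlies most streaming alert rules: compare the trailing average against a threshold and fire while the data is still actionable.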

Read the source article at library.nastel.com

Cohesion Critical in a Successful DevOps Team

Nastel Comments: One factor that is often overlooked in DevOps discussions is the fact that people and skills aren’t enough. In today’s fast-paced technology environments, processes and tools are equally important. The right tools enable specialists from a variety of technical backgrounds to find common ground by giving them a common view of the application they are supporting.

Cohesion Critical in a Successful DevOps Team

In the push to adopt DevOps among enterprise IT organizations, cohesion has been one of the biggest challenges in collaborating effectively. Historically, departments within a business existed in disparate locations, or silos. Marketing, IT and operations all had their own teams and their own communication channels, which made it difficult for members of separate teams to work successfully with one another despite the fact that they were working toward a common goal.

However, with the continuing adoption of DevOps, IT admins are finding that projects are easier and more well-executed when all the parts of the development cycle—creation, coding, testing and deployment—are working in tandem. Therefore, it’s becoming ever more critical that agile product teams are on the same page when it comes to their DevOps projects.

Read the source article at devops.com

Original Author: Miles Blatstein

Website Performance Testing Tips and Tools

Nastel Comments: Companies must do website performance testing and pay attention to its impact on the end user. APM solutions can identify where application issues are impeding follow-through on transactions. Identifying before the holiday rush that a certain user action isn’t executing significantly reduces the chances that users will abandon their shopping baskets and take their business elsewhere when that same application fails on high-volume days.

Website Performance Testing Tips and Tools

There’s nothing more frustrating than waiting for a website to load. In the new era of mobile internet usage, this complaint is shared by many. Studies regularly show that many people will abandon a site if it’s slow to load – the only thing that seems to change is that the time threshold keeps lowering.

That may be shocking to some, but consumers no longer have the patience they once had. Aside from impacting user engagement, a sluggish site can quickly see you drop down the search rankings: slow load times are a common trait of sites that offer a poor user experience, and search engine bots now consider speed when ranking a site.

The truth is, with today’s advances in technology, it’s unacceptable to offer a poor performing website. Whether you’re a small business or vast ecommerce empire, it’s essential to carry out performance tests periodically. We’ve selected 5 of the best tools to help you out.
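A periodic performance test can be as simple as timing a page fetch against a budget. The sketch below is illustrative; the 3-second budget is an arbitrary example threshold, not a figure from the article:

```python
import time
from urllib.request import urlopen

# Minimal sketch of a load-time budget check, in the spirit of the testing
# tools discussed above. The 3-second budget is an invented example value.

BUDGET_SECONDS = 3.0

def check_load_time(fetch, budget=BUDGET_SECONDS):
    """Time a page fetch and report whether it stays within the budget."""
    start = time.monotonic()
    fetch()
    elapsed = time.monotonic() - start
    return {"seconds": round(elapsed, 2), "within_budget": elapsed <= budget}

# Real usage would fetch a live page, e.g.:
#   check_load_time(lambda: urlopen("https://example.com").read())
# Here a stub stands in so the sketch runs without network access.
result = check_load_time(lambda: time.sleep(0.1))
print(result["within_budget"])
```

Run on a schedule, a check like this catches regressions before a high-traffic day does.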

Read the source article at Business 2 Community

Test Better, Test Faster, Test Smarter

Nastel Comments: Test smarter: identifying problems sooner in the application life cycle yields better results when issues need to be remediated. This can only happen when development and production work together as a team, using a common tool set, and when development has full visibility. This approach saves time and money, and helps organizations meet SLAs and drive ROI from their applications.

Test Better, Test Faster, Test Smarter - DevOps

How do you balance the need to “go fast” with the need to test everything and deliver high-quality software? With applications the driving force in today’s economy, the quality and release cadence of your software are critical to your business and your bottom line. You want to get software updated in the hands of your […]

Ask yourself: what are you trying to achieve? Is it a process where you can release code more quickly? Or where you can address bugs more readily? Then make sure you’re intimately aware of what you have in place. You need to know every step, task, process and tiny bit of your testing procedures and supporting infrastructure. Draw your entire end-to-end workflow on a whiteboard and find the bottlenecks that are slowing it down.

Read the source article at devops.com

Original Author: Contributor

How to create speedier infrastructure for an app-centric business

How to create speedier infrastructure for an app-centric business

‘Time is money’ – within business this phrase could not be more relevant, particularly in today’s well-connected society. As everything in technology progresses – becoming more efficient, smarter and faster – people’s expectations are constantly growing, and they end up frustrated when something hinders the process. When it comes to a network failure or a terrible Internet connection in business, slow is the new broke.

Read the source article at Information Age