Performance Monitoring Will Need to Join the Crowd

A number of applications have been developed that rely on crowd-sourcing to work. It’s a change in architecture that is a natural progression of the current state of the art in software development and delivery: component architectures on distributed infrastructure serving huge numbers of users. In any case, these apps must perform well and provide a great user experience in order to grow and retain their users; otherwise their crowd-sourced information will be just a few people hanging out, not a crowd. I use one of these applications almost daily, Waze, and I rely on it to tell me how to get around an ever-changing commute.

I assume that the providers of these applications want to make money, possibly with advertising as Waze has started to do, so the performance of the applications from that perspective is equally important. They may need to provide SLAs to advertisers or sell services to other software providers.

Here’s the rub: how do they monitor the performance of an application that is largely out of their control?

You can monitor applications today in several ways. Synthetic tests replicate what a user would do on a website or application and check whether that path is responsive; they work by driving a browser from a network location and duplicating the actions a user would take: search, type, navigate. A server agent can collect data on real users whose activities go through a server, passively gathering measurements as users travel the application delivery chain from the browser to the application server. But how would you monitor peer-to-peer, crowd-sourced performance? Crowd-sourced applications operate from the endpoints of the internet, mostly phones, travelling over a variety of networks (cell, Wi-Fi, multiple carriers), and the patterns change all the time.
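To make the synthetic approach concrete, here is a minimal sketch of a scripted check. The `run_synthetic_check` helper, the step names and the SLA threshold are all illustrative assumptions, not any vendor’s API; a real synthetic monitor would drive an actual browser from multiple network locations.

```python
import time

def run_synthetic_check(steps, sla_seconds=2.0):
    """Time each step of a scripted user path, as a synthetic monitor would.

    `steps` is a list of (name, callable) pairs -- e.g. load the home page,
    type a search, navigate to a result. Returns (name, elapsed, within_sla)
    tuples so slow steps can be flagged.
    """
    results = []
    for name, action in steps:
        start = time.monotonic()
        action()  # in a real monitor: load a URL, type into a field, etc.
        elapsed = time.monotonic() - start
        results.append((name, elapsed, elapsed <= sla_seconds))
    return results
```

Running the same script on a schedule from several network locations gives you a responsiveness baseline; the passive, real-user approach fills in what a script can’t anticipate.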

If you want to accurately monitor the performance of web applications, you need to follow the same delivery chain from the actual browsers that people use. If you want to monitor crowd-sourcing, I’m thinking you have to monitor from the crowd.

Read the source article at

Onward and Upward: Scaling Your Website into the Future

Whether you’re a budding company or you’ve just launched a new website, chances are you’ll start experiencing steady increases in site traffic as you expand your business. You may be equipped for a few dozen customers now, but what happens when you reach thousands or (ideally) hundreds of thousands?

If your site isn’t prepared for sudden or even gradual traffic spikes, you may end up with seriously slow response times or even a crashed server. It’s vital for online companies to have plans in place to deal with traffic surges before they actually happen. Otherwise, your customers’ experiences can be negatively impacted by sluggish load times and downtime. As they say, an ounce of prevention is worth a pound of cure.

Scalability is a site’s ability to function in the same capacity with 100,000 users as it would with 10 users. There are two very important factors to consider when scaling your site: network utilization (bandwidth) and server utilization (CPU, RAM, etc.). You have a variety of options when it comes to implementing changes in your utilization of each, including auto-scaling, which automatically adjusts your server utilization depending on site traffic.
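As a rough illustration of the auto-scaling idea, here is a toy target-tracking rule: pick an instance count that brings average CPU utilization back toward a target. The function name and parameters are assumptions for illustration; real auto-scalers add cooldown periods, multiple metrics and predictive logic.

```python
import math

def desired_instances(current, cpu_utilization, target=0.6,
                      min_instances=1, max_instances=20):
    """Target-tracking scaling: if average CPU is above `target`, add
    instances; if below, remove them -- clamped to a min/max range."""
    if current <= 0:
        return min_instances
    desired = math.ceil(current * cpu_utilization / target)
    return max(min_instances, min(max_instances, desired))
```

For example, 4 instances averaging 90% CPU against a 60% target would scale out to 6, while 2 instances idling at 30% would scale in to 1.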

But even if you have your scaling in place, how can you be sure it works? An untested scaling system can be just as risky as not having one at all. It’s necessary to regularly test and monitor your site’s scalability to ensure that it pulls through when you need it to.

Read the source article at Apica

What to expect from application performance monitoring tools

Without effective performance management, applications suffer response-time delays that anger customers and ruin employee productivity. Trial-and-error troubleshooting and poor visibility into problems cause outages. Both situations make key business functions effectively unavailable for long periods, choking off the sales and production cycle.

An APM tool must discover root causes and then identify fixes rapidly. Look for tools that can troubleshoot proactively as well as reactively. Are problem reports easy to use and on point? Can you drill down deeply into any tier of the architecture or the network, at multiple levels of the software stack (application, application server and database)?

Good application performance monitoring tools cause surprising jumps in customer and business/end-user satisfaction. IT teams can deliver up to an order-of-magnitude faster problem correction and cut out trial-and-error outages. Relatively small glitches are caught and rectified before they become big problems.

The tools help teams prevent problems, not just solve them. Through performance trend analysis, IT teams can lessen the burden of architecture upgrades and make more effective and cost-effective use of hardware. Without performance management, expect to see IT always fighting fires and applications suffering at key times.

Read the source article at Data Center information, news and tips

Blue Cross Data Breach Investigated

NEW YORK–(BUSINESS WIRE)–The Grant Law Firm, PLLC is investigating whether Premera Blue Cross (“Premera”) has violated certain data breach and consumer laws arising from a May 5, 2014 cyberattack and seeks to obtain compensation for affected policyholders. The attack, which impacts over 11 million Premera policyholders, is one of the largest healthcare data breaches to ever occur. On March 17 …

Read the source article at

Scaling DevOps and Web IT

Prior to my assuming coverage of APM, I had several notes in process on DevOps and Web-scale IT that needed completion (note: I am not giving these areas up entirely – I will still continue to write on them as time permits). Last week while I was out on Spring Break with the kids, they were published online for Gartner subscribers. Here’s a brief summary of each one for those without access:

Web-Scale IT Is Closer Than You Might Think: In this note, I took a look at 32 technologies, processes and concepts that help to undergird what I call Web-scale IT: things like DevOps, Open Compute, Web-Oriented Architecture and others. The verdict? While little of it is “mature” in terms of enterprise adoption, enterprises seeking to become more like the Amazons, Facebooks and Googles of the world have many if not most of the means to do so – today. Of course, Web-scale IT takes more than technology; it also (usually) requires a significant technical skills base, although there are vendors in the market today that are trying to address this issue.

How to Scale DevOps Beyond the Pilot Stage: Several Gartner clients are beyond the DevOps pilot stage and now need guidance on how to further broaden the implementation of DevOps internally.

Read the source article at Gartner Blog Network

Security and Privacy Breaches: A Better Approach

More firms believe they are prepared to react to security and privacy breaches, according to a new research survey. Organizations are preparing for threats, and research shows they are continuing to invest in security tools and strategy.

Cybercriminals are moving past new security measures, cracking and stealing customer and corporate data from the cloud. Midsize firms can better address security needs by continuously investing in development, implementation and maintenance of innovative and adaptable strategies to better combat malicious behavior without requiring additional resources.

Some key measures that can help address the business concerns of midsize IT include:

  • Security bundles that leverage flexible and on-demand know-how to protect against Internet threats
  • Identity life cycle management and managed identity services that enable user collaboration with provisioning support
  • Managed intrusion prevention and detection services

Read the source article at Midsize Insider

Complex Event Processing Market Worth $4,762 Million by 2019 – New Report by MarketsandMarkets

The Complex Event Processing Market is expected to grow from $1,005 Million in 2014 to $4,762 Million in 2019.

MarketsandMarkets observes that there is increasing demand for Complex Event Processing technology in government, defense and aerospace agencies due to its cost-effectiveness and responsive operational technology, which can be integrated without disrupting traditional legacy systems.

Nastel is one of the top companies already providing Complex Event Processing technology in its current integrated middleware software product portfolio.

Learn More:

Read the source article at

Leveraging User Activity Monitoring to Protect Information and Employees from Cyber Threats

User activity monitoring (UAM) can play a key role in protecting both employees and organizations. It is designed to look for activities that are anomalous or indicative of malicious intent. UAM doesn’t care whether the malicious activity is machine- or human-driven, and therefore it protects the employee against both malware and human-driven identity theft.
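As a toy illustration of the baselining such a tool performs, the sketch below flags days whose activity count deviates sharply from the historical mean. The function and threshold are assumptions for illustration; production UAM systems model many signals per user, not a single count.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=2.0):
    """Return indices of days whose activity count is more than `threshold`
    standard deviations from the mean -- a crude anomaly baseline."""
    if len(daily_counts) < 2:
        return []
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]
```

A week of normal file-access counts with one 10x spike would flag only the spike, whether the cause was malware exfiltrating data or a stolen credential in human hands.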

Read the source article at EMA Blog Community

Bridging legacy tech and cloud with middleware

IBM remembered that middleware is the tool to use to bridge disparate environments. Instead of bridging your current IT department with the one you just acquired, it’s now applying middleware to bridge between clouds.

“Cloud is everywhere,” said IBM’s Don Boulia, VP of Cloud Services. The company is positioning itself as the ideal candidate to help enterprises access the cloud without sacrificing their systems of record. IBM is moving traditional middleware …

Read the source article at SiliconANGLE