Obamacare website cannot handle the load – a classic use case for AutoPilot

Several days ago I looked at the HealthCare.gov website, just to understand what Obamacare means. The website was up and working. Today one of our employees brought to my attention an article by Sharon Begley of Reuters reporting that the Obamacare website had locked up just a few days after its launch. The story caught my interest since it described a typical use case faced by many enterprises.

Several IT experts interviewed by Sharon Begley offered different theories about the causes of the outage. Some stated that the flaw is in the architecture of the application and that adding capacity may not help. One was quoted as saying there is a coding bug in the system. Another IT company stated that the problem is in database access: the more you ask, “the more it gets overwhelmed”. “The government officials blame persistent glitches,” since they did not anticipate 8.5 million users within a few days. An independent contractor raised several hypotheses, using the word “overwhelming” about the large number of JavaScript files potentially downloaded to web browsers. Another interesting probable cause compared the situation to a DDoS attack on the website. The internal technicians tried to increase capacity by adding new servers and tuning the configuration, but it did not help. Each group was trying to come up with explanations and possible root causes.

The truth is – they may all be right. But hearing all these assumptions and probable causes is, indeed, overwhelming to the people in charge. These are exactly the issues faced by many companies using various silo monitoring solutions for their mission-critical applications. Dozens of IT personnel gather in war-room meetings and go through finger-pointing and blame-storming sessions to identify the probable causes impacting the performance of their business services. While this is going on, the application is not performing. In fact, I tried the Obamacare website again today and it still does not work.

I am sure that the people who designed the system are qualified professionals. They most likely went through a thorough design process, but, as happens everywhere else, important topics such as high availability, reliability and scalability at all tiers of a composite application became an afterthought. Usually, under the pressure of meeting target dates and relying on existing infrastructure monitoring, little attention is paid to scalability and root cause analysis. It looks like they did not anticipate the viral spread of the healthcare message and the desire and curiosity of people.

This application clearly needs a solution that monitors not only web browsers, databases and servers, but can also diagnose probable causes and predict potential failures. It should provide visibility into the different tiers of transaction flows, anticipate performance bottlenecks at every tier, end to end, and point to potential root causes before they turn into outages. These are not new problems; we deal with them daily with our customers. The number of users and transactions in Obamacare is not overwhelming at this stage. I am sure that, if the problems are properly addressed, they can achieve their goals and provide services to people that need medical care.
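
To make the idea concrete, here is a minimal sketch of end-to-end transaction stitching: tier-level timing events are correlated by a transaction id, and the tier that dominates the response time is flagged as the likely bottleneck. This is an illustration of the concept only, not the AutoPilot implementation; the event shape and names are hypothetical.

```python
from collections import defaultdict

def stitch_transactions(events):
    """Group tier-level timing events by correlation id and flag the slowest tier.

    Each event is (correlation_id, tier, elapsed_ms). Returns a dict mapping
    each correlation id to (total_ms, bottleneck_tier).
    """
    flows = defaultdict(dict)
    for corr_id, tier, elapsed_ms in events:
        flows[corr_id][tier] = flows[corr_id].get(tier, 0) + elapsed_ms

    report = {}
    for corr_id, tiers in flows.items():
        total = sum(tiers.values())
        bottleneck = max(tiers, key=tiers.get)  # tier with the largest share of latency
        report[corr_id] = (total, bottleneck)
    return report

# Example: one registration request traced across three tiers.
events = [
    ("txn-1", "web", 120),
    ("txn-1", "app", 250),
    ("txn-1", "database", 1900),   # database dominates the response time
]
print(stitch_transactions(events))  # {'txn-1': (2270, 'database')}
```

With events flowing in from every tier, this kind of correlation is what turns a room full of competing theories into one answer: which tier, on which transactions, is actually slow.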

Happy 20th anniversary to WebSphere MQ

In September 1993 IBM released the first version of its asynchronous message queuing (MQ) product, called MQSeries. Today this product, known as WebSphere MQ, dominates the market and is used by more than 90% of Fortune 1000 enterprises as the messaging middleware platform for mission-critical applications. What an achievement!
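
For readers newer to the idea, asynchronous messaging means the sender puts a message on a queue and moves on immediately, while the receiver consumes it on its own schedule. The toy sketch below shows only the decoupling concept using Python's standard library; it is not the WebSphere MQ API, which adds persistence, transactions and delivery across systems.

```python
import queue
import threading

# A queue decouples sender and receiver: the sender does not wait for the
# receiver, and the receiver drains messages at its own pace.
q = queue.Queue()
results = []

def receiver():
    while True:
        msg = q.get()
        if msg is None:          # sentinel: stop consuming
            break
        results.append(msg.upper())

t = threading.Thread(target=receiver)
t.start()

for text in ("order placed", "order shipped"):
    q.put(text)                  # returns immediately; no handshake with receiver
q.put(None)
t.join()
print(results)  # ['ORDER PLACED', 'ORDER SHIPPED']
```

That simple decoupling, made reliable across platforms and networks, is what MQSeries delivered twenty years ago.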

I want to wish a happy 20th anniversary to all the people who made it happen, both at IBM Hursley in the UK and at System Strategies, the company that developed and released MQ in a joint venture with IBM on multiple distributed platforms, including OS/400, OS/2, AIX, Tandem, VMS and DOS/VSE. Actually, I had the privilege of managing that implementation of MQ.

How far have we come in the last 20 years for MQ management? Many software companies built their practices around IBM’s messaging product, but where are they today?

At the end of the nineties and the beginning of this century, Gartner had a special Magic Quadrant covering solution providers for MQSeries. Nastel, the company I founded in 1994, was among the first to introduce MQ management on distributed platforms and was ranked as a “visionary” vendor. As a startup focused on middleware management, we successfully competed against giants such as Boole & Babbage, Candle, Tivoli, BMC and Landmark. BMC, Boole & Babbage and Landmark all OEM’ed our old MQControl technology.

Although a lot has changed in the past 20 years, Nastel remains focused on middleware management, while other competitors have been either acquired or have shifted focus to other technology areas. As we look forward to the next 20 years, Nastel is prepared to meet the challenge of addressing the needs of enterprises, financial institutions, retail operations and government agencies with the management, monitoring and self-service of the WebSphere MQ family of products.

A new breed of middleware technologies, such as new messaging transport layers, ESBs and message brokers, is continuously being introduced to the market. These technologies interact with each other and must be highly available and reliable to keep mission-critical applications performing continuously. Each middleware vendor provides its own administration or management instrumentation, but has little domain expertise in monitoring requirements. And it’s obvious: monitoring and management is not their business.

Enterprises must deal with the complexity of various middleware technologies and with silo tools that provide no visibility into the interdependencies within composite applications. This forces companies to hire highly skilled employees, savvy in various technologies, products and operating systems, who have to understand not only the technologies but also what it takes to manage and monitor them. These multi-subject-matter experts build their own tools and write scripts in languages they are personally comfortable with, so they have ready answers and can avoid finger-pointing in war rooms with dozens of people. Although it may not be immediately evident, companies spend substantial amounts of money and resources maintaining these homegrown solutions and trying to keep pace with version changes, new technologies and OS upgrades. Enterprises that take this approach are exposed to the turnover of those experts, which puts their business services at risk. In those cases security is an afterthought, and homegrown products are usually in violation of internal auditing and security compliance requirements.
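
The kind of homegrown script described above often looks something like the following: hard-coded queue names and thresholds, a simple depth check, a printed alert. The queue names and limits here are hypothetical, and a real script would query the queue manager itself rather than a static dict; the point is how narrow and maintenance-heavy such one-off tooling is.

```python
# Hypothetical homegrown depth check: every value below is hard-coded by the
# expert who wrote it, and breaks silently when queues are renamed or added.
THRESHOLDS = {"PAYMENTS.IN": 5000, "ORDERS.IN": 2000}

def check_depths(depths):
    """Return alert strings for queues whose current depth exceeds its threshold."""
    alerts = []
    for qname, depth in depths.items():
        limit = THRESHOLDS.get(qname)
        if limit is not None and depth > limit:
            alerts.append(f"ALERT {qname}: depth {depth} > {limit}")
    return alerts

# In practice `depths` would come from the queue manager's admin interface.
print(check_depths({"PAYMENTS.IN": 7500, "ORDERS.IN": 120}))
# ['ALERT PAYMENTS.IN: depth 7500 > 5000']
```

Multiply this by dozens of scripts, authors, platforms and product versions, and the hidden maintenance cost becomes clear.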

An easily extensible, secure and consolidated middleware monitoring solution can address these pain points. Middleware management is a segment of the overall APM market that requires deep domain expertise in specific technologies to provide not only alerts but also diagnostics and visibility into internal interactions. For multi-tier composite applications the messaging layer, and especially visibility into the ESB tier, is very important. A product manager at a leading APM company once told me that “understanding the middleware layer from the monitoring perspective is similar to being a brain surgeon”.

In summary, in my view the most pressing topic in middleware management in the near future is simplifying the underlying complexity. That includes providing unified middleware monitoring for infrastructure groups; well-defined, highly secure self-service for development and pre-production teams; and quick root cause analysis and problem resolution for DevOps. Integration with the corporate ecosystem, including its enterprise management products and security services such as LDAP and Kerberos, is essential.

I’d like to know your opinions on this topic. Please let me know what your thoughts are; email me at dmavashev@nastel.com.

Acquisitions of CEP technologies

Two acquisitions of CEP (Complex Event Processing) technology companies were announced a couple of weeks ago, and both were a bit surprising to me. They caught my attention because the core of our AutoPilot solution, and the reason we win against our competitors, is our internally developed CEP engine, which helps us provide proactive monitoring to our customers.
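
For readers unfamiliar with CEP, the core idea is detecting patterns across streams of simple events, for example "fire a complex event if N matching events arrive within a time window". The sketch below is a deliberately minimal illustration of one such sliding-window rule, not the AutoPilot engine; real CEP engines support rich rule languages, joins across streams and much higher throughput.

```python
from collections import deque

class SlidingWindowRule:
    """Fire when `count` matching events arrive within `window` seconds."""

    def __init__(self, count, window):
        self.count = count
        self.window = window
        self.times = deque()   # timestamps of recent matching events

    def on_event(self, timestamp):
        self.times.append(timestamp)
        # Evict events that have aged out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.count

# Rule: 3 errors within 10 seconds constitutes an incident.
rule = SlidingWindowRule(count=3, window=10)
fired = [rule.on_event(t) for t in (0, 4, 7, 30)]
print(fired)  # [False, False, True, False]
```

The third event fires the rule because all three fall inside the 10-second window; by the fourth, the earlier events have aged out. Proactive monitoring is built from many such rules evaluated continuously over live event streams.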

For the past 10 years Tibco has claimed to be the thought leader in CEP, which is the core of its Business Event Processing (BEP) offering. Yet it looks like StreamBase, which used to be the only pure-play CEP company, had better technology, had announced Big Data integration and had more traction than Tibco in the capital markets.

Software AG claims to have a CEP engine as part of its WebMethods Business Events offering, yet acquired Apama from Progress. Certainly Progress does not see CEP anywhere in its future plans.

Interesting, isn’t it?

Some thoughts on APM segmentation

Many software vendors play in the Application Performance Monitoring (APM) market. All claim similar capabilities and features as defined by industry analysts. At first glance the differences are blurry, and customers have difficulty distinguishing the products that would best address their needs. Companies spend months, in some cases years, selecting the right solution, tying up valuable resources; in the meantime their problems persist, in many cases affecting their bottom line. Some vendors classified as APM players have very little overlap in their value propositions and clear, distinct differences, yet are perceived as competitors by each other.

In my view APM is a general term in IT, similar to the classification of mammals in biology. Imagine an alien (a prospect) arriving on Earth, seeing different mammals (APM vendors) and having difficulty distinguishing among them. They all have very similar features: eyes and ears, a nose and a mouth; they eat and sleep; some walk on four limbs and some on just two; they even have a similar reproductive process. Obviously, there are big differences between humans and monkeys, or between zebras and lions, for example.

In my opinion the APM market should be broken down into well-defined segments to make choices easier for customers. We at Nastel have strong roots in message-oriented middleware management. IBM’s WebSphere MQ family of products, ESB, Message Broker and DataPower, along with Tibco, are technologies used by major enterprises. Middleware is the nervous system of their mission-critical applications and must be up, running and performing. We call our target market Middleware-centric APM, and I believe we are the best and have a unique value proposition in this area.

Announcing “Mav in the middle…”

This is the blog of David Mavashev, CEO and Founder of Nastel Technologies. It is called “Mav in the middle…”

The blog will cover topics in application performance monitoring (APM), Complex Event Processing (CEP), self-service, message tracking, middleware monitoring / management and also general observations about the IT industry.

The first post will be coming soon. Stay tuned.

Visit and then bookmark the blog at: http://www.nastel.com/mav-in-the-middle/