Nastel recently joined a conversation on Infoworld.com around what writer Dan Tynan described as “IT’s worst addictions.” From our perspective, data is one of these addictions, and by far one of the biggest pain points that the IT industry is forced to deal with today. ‘Big Data’ is the hot potato of 2012—everyone is talking about it, and the term is increasingly tossed around within operational circles in organizations. Everyone wants to know “What are we doing about Big Data?” or “What’s our Big Data strategy?” If you’re a vendor and you don’t have a solution to manage it, analyze it, optimize it, minimize it, or otherwise be involved in Big Data, you’ve missed the boat.
How much of it is hype?
Mr. Tynan correctly identified data as an IT addiction to be dealt with. Technology and culture are pushing us toward a critical point where information will be generated faster than we can physically store it, let alone use it, yet in Tynan’s words, “many IT pros are unrepentant information junkies.” Instead of collecting data simply because it can, IT needs to be more selective, focusing on data it can legitimately use. However, the problem is even worse than Mr. Tynan thought. What he didn’t consider is that as our systems grow larger, spreading to the cloud and creating the “big data” he speaks of, there is another sort of big data. This data comes from monitoring the “big applications” and trying to prevent faults. Yet, often the secret to successful application performance monitoring is in knowing what data to discard.
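To make the idea of discarding monitoring data concrete, here is a minimal, hypothetical sketch (not any particular product’s logic): a collector keeps a metric sample only when it deviates noticeably from the recent baseline, letting the bulk of “normal” readings be thrown away. The sample values and the z-score threshold are assumptions for illustration.

```python
import statistics

def worth_keeping(samples, new_value, z_threshold=3.0):
    """Return True if new_value is anomalous relative to recent samples.

    Illustrative filter: discard readings close to the rolling baseline,
    keep only outliers worth investigating.
    """
    if len(samples) < 5:
        return True  # not enough history to judge; keep it
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if stdev == 0:
        return new_value != mean  # flat baseline: keep any change
    return abs(new_value - mean) / stdev > z_threshold

# Hypothetical response-time baseline (milliseconds)
baseline = [100, 102, 99, 101, 100, 98, 103]
print(worth_keeping(baseline, 101))  # normal reading -> False (discard)
print(worth_keeping(baseline, 250))  # spike -> True (keep)
```

Even a simple filter like this can cut stored monitoring volume dramatically while preserving exactly the events that matter for diagnosis.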
Another pain point in IT is the notion of control. IT workers jockey for stability and pseudo-control over their environments, allowing themselves to become comfortable with things like operational silos, long-term vendor relationships, and even looking at problems from an ‘It’s not me’ perspective. When the business is forced into silos rather than obtaining a holistic view of its infrastructure, performance suffers and customer attrition frequently follows. In today’s competitive marketplace, that’s no way to run a railroad.
At a micro level, we see this all too often in application performance monitoring. Organizations we engage with—many of which are leaders at the top of their game in their respective industries (financial services, insurance, healthcare, and others)—are still managing their middleware-centric applications in silos with separate monitoring tools for each. Therefore, it’s not surprising that many of them have trouble pinpointing problems in composite applications that span these silos, let alone tracking message flows from one application to another or from distributed to mainframe.
We encourage these enterprises to get closer to the information and connect the dots between these multiple silos. This may mean tool consolidation, which can help reduce both cost and risk, or at least feeding all of the monitoring events into something that can correlate the information and make it actionable. Ultimately, knowledge is power, not raw, silo-specific data. Implementing a single point of view where IT can monitor multiple stacks can be very helpful here. Even more so, using what Gartner calls a pattern-based strategy, where automation discovers complex patterns in large volumes of data, IT managers can quickly determine the root cause of application performance issues and, as a result, ensure high service levels for their customers.
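As a rough illustration of what “connecting the dots” across silos can look like, the sketch below correlates hypothetical events from separate monitoring sources by a shared transaction id and flags the first failing hop as the likely root cause. The event shape, source names, and ids are all assumptions for the example, not the schema of any specific tool.

```python
from collections import defaultdict

# Hypothetical events emitted by separate monitoring silos, each tagged
# with the transaction id of the business flow it belongs to.
events = [
    {"source": "mq_monitor", "txn_id": "T42", "ts": 100.0, "status": "ok"},
    {"source": "app_server", "txn_id": "T42", "ts": 100.4, "status": "ok"},
    {"source": "mainframe",  "txn_id": "T42", "ts": 107.9, "status": "error"},
    {"source": "app_server", "txn_id": "T43", "ts": 101.1, "status": "ok"},
]

def correlate(events):
    """Group events by transaction id, ordered by timestamp."""
    flows = defaultdict(list)
    for e in events:
        flows[e["txn_id"]].append(e)
    for flow in flows.values():
        flow.sort(key=lambda e: e["ts"])
    return dict(flows)

def root_cause(flow):
    """Return the source of the first failing hop, or None if all hops are ok."""
    return next((e["source"] for e in flow if e["status"] == "error"), None)

flows = correlate(events)
print(root_cause(flows["T42"]))  # -> mainframe
print(root_cause(flows["T43"]))  # -> None
```

The point of the sketch is the shape of the solution: once events from every silo land in one correlated view, pinpointing where a composite application broke becomes a lookup rather than a war-room exercise.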