In the 1970s a CPU such as the Z80 held about 8,500 transistors, and in the 1980s an 8086 processor held around 29,000 transistors. Today's top-end CPUs hold close to 20 billion transistors.

Systems used to contain around 1,000 bytes of RAM; today's desktops generally contain 16,000,000,000 bytes (16 GB) of RAM or more.

Secondary storage of 10 million bytes was a lot in the 1980s; today 10 TB would be seen as a nice amount for a home computer.

Everything is now connected by high-speed networks and internal high-speed buses, allowing datacenters and cloud environments to contain thousands of high-end physical systems running operating systems that virtualize the hardware into many thousands of logical systems.

All of this complexity is abstracted into manageable units of technology. The individual components inside a silicon chip or a hard disk are no longer considered individually, as their sheer volume makes such a task overly complex. Instead, we treat larger and larger blocks of technology as manageable components.

The power of abstraction is that it allows command and control to be maintained as complexity grows.

A simple way of looking at this process is to say that a large number of small components is harder to manage than a small number of large components.

We no longer see each subsystem as important in itself; it is the resultant large system that is considered. When a hard disk in a RAID array fails, its work is passed to the other disks, and a monitor alerts an operator to swap out the failed unit. But the work continues uninterrupted.

The same goes for entire servers: a failure just means the work is passed to other servers. In some environments the failed server is simply left in the rack, not even swapped out.
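The failover pattern described above can be sketched in a few lines. This is a minimal illustration, not any specific product's API; the pool, unit names, and load figures are all hypothetical.

```python
# Minimal sketch of redundancy-based failover: when one unit in a pool
# fails, its work is redistributed to the surviving units and an alert
# is raised, but processing continues uninterrupted.
# All names here are illustrative, not a real RAID or cluster API.

class RedundantPool:
    def __init__(self, units):
        # Map each unit name to the amount of work it currently carries.
        self.load = {u: 0 for u in units}

    def assign(self, amount):
        # Send new work to the least-loaded healthy unit.
        unit = min(self.load, key=self.load.get)
        self.load[unit] += amount

    def fail(self, unit):
        # Redistribute the failed unit's work across the survivors.
        orphaned = self.load.pop(unit)
        for survivor in self.load:
            self.load[survivor] += orphaned // len(self.load)
        print(f"ALERT: {unit} failed; operator should swap it out")

pool = RedundantPool(["disk0", "disk1", "disk2"])
for _ in range(6):
    pool.assign(10)        # each disk ends up carrying 20 units of work
pool.fail("disk0")         # disk0's work is absorbed by disk1 and disk2
print(pool.load)
```

The point of the sketch is that the caller of `assign` never notices the failure; only the monitoring layer (the `ALERT` line) involves a human.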

Abstraction allows for all of this.

And yet when we come to monitoring business applications, many solutions fail to abstract up to the business level, continuing to monitor only at the technology level. This means that when a problem occurs, alerts are raised at the technology level and not at the business level. It is left to the operators of all the individual technology systems to get together and debate why a business process is not working as expected.

There is a way to abstract the technology all the way up to the business processes. Where this is employed, a subtle ripple at the business-process level can be identified before it is noticeable to the business users. These ripples can be traced all the way back through the stacks of technology, so that complex relationships between many systems can be understood, and their impacts on the business can be monitored and even forecast. This provides an entirely more advanced way to consider monitoring and controlling all the technology involved in delivering business. The result of thinking this way is that failures are dramatically less frequent, and the time to repair the few that may still occur is dramatically shorter. The overall cost and complexity of management is also significantly lower.
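The abstraction from technology events up to business processes amounts to maintaining a dependency map between the two levels. The sketch below shows the idea under simplified assumptions; the topology, component names, and process names are hypothetical and not any specific product's data model.

```python
# Illustrative sketch of abstracting technology-level events up to the
# business level: each business process depends on a stack of technology
# components, so a component-level alert can be translated into a
# statement of business impact. Topology and names are hypothetical.

DEPENDENCIES = {
    "order-processing": ["web-tier", "message-queue", "orders-db"],
    "payments":         ["message-queue", "payments-gateway"],
}

def business_impact(failed_component):
    """Return the business processes affected by a failed component."""
    return [process for process, stack in DEPENDENCIES.items()
            if failed_component in stack]

# A technology-level event ("message-queue is degraded") becomes a
# business-level alert naming the processes at risk.
print(business_impact("message-queue"))
```

With a map like this, a single component alert can be reported as "order processing and payments are at risk" rather than leaving operators to reconstruct the relationship by debate.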

Lower costs, higher availability, better MTBF and MTTR.

Nastel is the world’s leader in providing business level monitoring for large enterprises.

To find out more, visit