CopperEgg Adds Global Monitoring Stations, But How Useful Is Visibility Without Control?

The modern economy is very different from that of days gone by. While in the past the most important thing was to remember the first names and birthdays of your regular customers, organizations have far more to concern themselves with today. As more and more organizations rely upon the performance of various websites for their survival, there has been a corresponding increase in the number of vendors offering performance monitoring. A bonus for these vendors is that they get to throw around buzzwords – distributed, cloud, SaaS and so on. One such company is CopperEgg, a three-year-old firm that offers performance monitoring across public and private clouds. In one of those statistics that sounds really impressive despite meaning nothing to most people, CopperEgg boasts of processing 100 billion transactions per day.

But as organizations start to deliver their applications globally, they need to be able to assess the performance of those applications similarly globally. Hence the news today that CopperEgg is adding a number of new test stations to its catalog. Test station locations now include Sydney, Australia; São Paulo, Brazil; and Amsterdam, Netherlands. CopperEgg is also boasting that it tests services every 1.6 seconds, which in this version of the arms race is apparently 36 times more frequently than the competition. They’re even delivering new weekly, monthly and (hold your excitement please) custom timeframe uptime reports.

But really it doesn’t matter much…

Don’t get me wrong, I’m sure CopperEgg is a great product, and web monitoring generally is absolutely important in this complex and digitally enabled world. But visibility without control always seems like just half of the formula. I believe the future of IT will be typified by many things, but one will be an adherence to continual improvement – the idea of Plan/Do/Check/Act is an important part of this shift. I also believe that as the pace of change within organizations ramps up, IT departments will be looking for broad platforms that provide a closed loop for various parts of their operation.
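To make the "visibility without control" point concrete, here is a minimal sketch of what a closed loop looks like in code: the check (visibility) feeds directly into an action (control), completing the Plan/Do/Check/Act cycle. Every name here is hypothetical and illustrative – this is not CopperEgg's API or any vendor's, just the shape of the idea.

```python
# Hypothetical closed-loop monitor: the health check feeds directly into
# a remediation action, rather than stopping at an alert.

def check_service(probe):
    """Run a health probe (the 'Check' step); True means healthy."""
    try:
        return probe() is True
    except Exception:
        return False

def monitoring_loop(probe, remediate, max_cycles=3):
    """Each cycle: check the service; on failure, take action ('Act')."""
    history = []
    for _ in range(max_cycles):
        healthy = check_service(probe)
        history.append(healthy)
        if not healthy:
            remediate()  # the control half that standalone monitoring lacks
    return history

# Usage: a fake service that is down until the loop remediates it.
state = {"up": False}
result = monitoring_loop(lambda: state["up"],
                         lambda: state.update(up=True))
print(result)  # first check fails, then remediation brings the service back
```

A monitoring-only tool stops after `history.append(healthy)` and hands the rest to a human or a separate platform; the closed-loop version is the whole function.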

In the past I’ve talked about the rise of “fabrics”, solutions that span a broad range of different services. I’ve felt that depth of functionality is less important than whether the fabric spans the breadth of tools in use within the organization (and, for clarification, these fabrics appear in multiple places – enterprise social networking, integration services and infrastructure management are all examples). One could argue that performance monitoring in and of itself is enough, but it seems to me that the next logical step from the insights delivered through monitoring is to actually be able to do something with those insights – to take action from within the monitoring tool.

Interestingly enough, Forrester recently published its “State of IT Monitoring” study. One of the key findings was that having too many IT tools is a real issue within enterprises – 14% of respondents are using 20 or more different tools to find and fix issues. Obviously this leads to slow detection of problems and, more importantly, increased time to resolution. This fragmented approach towards IT – with separate tools for monitoring, for management and for different parts of the IT operation – just doesn’t work in this more complex and performance-critical environment.

So, over to the readership. Is standalone performance monitoring going to be a viable proposition moving forward or will monitoring be rolled into management platforms?
