Yesterday EMC and VMware held a conference presenting new developments and upcoming products for 2009. EMC outlined its future directions and upcoming products for maintaining and improving the datacenter, while VMware presented its own new directions along with the rationale for adopting cloud computing.
This got me thinking about virtualization as a tool for improving application uptime in the face of unexpected failures, and how virtualization actually affects the resiliency of services.
The main idea behind server virtualization for resiliency is decoupling the physical hardware that is installed from the hardware presented to the application, which makes it easy to move an application from one physical server to another. The most commonly cited reason for moving to a different physical server is hardware failure, with the goal of improving application uptime.
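The decoupling idea can be illustrated with a minimal sketch: the application is bound to a virtual machine, not to a physical host, so when a host fails the VM can be re-homed on a healthy one. The names here (`Host`, `VirtualMachine`, `migrate`, the `esx-*` host names) are hypothetical, not any vendor's API.

```python
# Hypothetical failover sketch: the application sees a stable VM identity,
# while the physical host behind it can be swapped out on failure.

class Host:
    """A physical server in the resource pool."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.vms = []

class VirtualMachine:
    """A VM placed on some physical host."""
    def __init__(self, name, host):
        self.name = name
        self.host = host
        host.vms.append(self)

def migrate(vm, pool):
    """Move a VM off its current host to the first healthy host in the pool."""
    target = next((h for h in pool if h.healthy and h is not vm.host), None)
    if target is None:
        raise RuntimeError("no healthy host available")
    vm.host.vms.remove(vm)
    vm.host = target
    target.vms.append(vm)
    return target

# Simulated hardware failure: the application's VM survives on a new host.
pool = [Host("esx-01"), Host("esx-02")]
vm = VirtualMachine("app-server", pool[0])
pool[0].healthy = False
new_host = migrate(vm, pool)
print(new_host.name)  # esx-02
```

The point of the sketch is only that the application's identity (`app-server`) never changes while the physical host beneath it does; in a real hypervisor this re-homing is what live migration and HA restart policies automate.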
Unfortunately, hardware failure is not the major cause of application downtime, as can easily be seen in the diagram above.
The article is from 1999, when only 20 percent of the problems were due to the technology, and things have only gotten worse since, because:
- hardware became a commodity
- software became more complex
- we added more abstraction layers
- managed programming languages became the norm
From the diagram one can see where things are heading. Virtualization adds flexibility for managing and fixing some of these problems, but when it comes to application downtime, it cannot change the human-related causes.
This illustrates one of the most important principles in design: the human is the weakest link, because most other problems can be solved with additional technology. That is why most emergency systems involve only one human interaction – starting the process and getting out of the way. From that point on, everything runs automatically.
We can see here how basic design principles help analyze even new technologies and directions.
If you read this far, you should follow me on twitter here.