Intel Vanderpool holds promise, some pitfalls

Tech Guide Part One
Fri Feb 25 2005, 08:58
INTEL INTRODUCED VT, or Vanderpool Technology, a few IDFs ago with great fanfare and little hard information. Since then, as the technology has moved closer to release, a little more information has emerged, but even more questions have come with it. In this four-part article, I will tell you a little about what VT is and what it does for you. The first part is about virtualisation, what it is and what it does, followed by the problems it has in the second part. The third chapter will cover Vanderpool (VT) itself and how it works on a technical level. The closing chapter will look at the uses of VT in the real world, and how you will most likely see it, or hopefully not see it, in action.

Virtualisation is a way to run multiple operating systems on the same machine at the same time. It is akin to multitasking, but where multitasking allows you to run multiple programs on one OS on one set of hardware, virtualisation allows multiple OSes on one set of hardware. This can be very useful for security and uptime purposes, but it comes at a cost.

Imagine an OS that you can load in nearly no time, and if it crashes, you can simply throw it out, and quickly load a new one. If you have several of these running at the same time, you can shut one down and shunt the work off to the other ones while you are loading a fresh image. If you have five copies of Redhat running Apache, and one goes belly up, no problem. Simply pass incoming requests to the other four while the fifth is reloading.
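
To make that concrete, here is a rough sketch of what such a failover loop could look like. The addresses, health check and reload step are all invented for the example; a real deployment would use a proper load balancer and the management interface of whatever virtualisation product is in use.

    # Toy failover loop: watch five web server instances and, when one stops
    # answering, stop sending it traffic and kick off a reload of a clean image.
    # All names and commands here are illustrative, not from the article.
    import time
    import urllib.request

    INSTANCES = ["10.0.0.%d" % i for i in range(1, 6)]   # five virtual servers
    healthy = set(INSTANCES)

    def is_up(host):
        try:
            urllib.request.urlopen("http://%s/" % host, timeout=2)
            return True
        except Exception:
            return False

    def reload_instance(host):
        # In a real system this would tell the virtualisation layer to throw
        # the broken instance away and boot a fresh copy of the saved image.
        print("reloading clean image on", host)

    while True:
        for host in INSTANCES:
            if is_up(host):
                healthy.add(host)
            elif host in healthy:
                healthy.discard(host)       # stop routing requests to it
                reload_instance(host)       # bring up a fresh copy
        time.sleep(5)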

If you save 'snapshots' of a running OS, you can reload it every time something unpleasant happens. Get hacked? Reload the image from a clean state and patch it up, quick. Virused? Same thing. Virtualisation provides the ability to reinstall an OS on the fly without reimaging a hard drive like you would with Ghost. You can simply load, unload and save OSes like programs.
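
The snapshot idea boils down to keeping a known-good copy around and throwing the running one away when it goes bad. The sketch below fakes it with file copies purely for illustration; real virtualisation software reverts a live guest to a saved snapshot rather than shuffling image files, and the file names are made up.

    # Toy illustration of the snapshot idea: keep a known-good copy of an OS
    # image and, whenever the running copy gets compromised, discard it and
    # restore the clean one. The file names are invented for the example.
    import shutil

    CLEAN_SNAPSHOT = "webserver-clean.img"   # saved in a known-good state
    RUNNING_IMAGE  = "webserver-live.img"    # the copy actually booted

    def restore_clean_image():
        # With real virtualisation software this is a 'revert to snapshot'
        # operation on a live guest, not a file copy, but the effect is the
        # same: the hacked or virus-ridden state is simply thrown out.
        shutil.copyfile(CLEAN_SNAPSHOT, RUNNING_IMAGE)
        print("guest restored from clean snapshot; now apply patches")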

It also allows you to run multiple different OSes on the same box at the same time. If you are a developer who needs to write code that will run on 95, 98, ME, 2000 and XP, you can have five machines on your desk or one with five virtual OSes running. Need to have every version of every browser to check your code against, but MS won't let you do something as blindingly obvious as downgrading IE? Just load the old image, or better yet, run them all at once.

Another great example is a web hosting company. If you have 50 users, each running a low-traffic web site that an average computer could easily handle, you can give them 50 boxes or one. 50 servers is the expensive way to go, very expensive, but also very secure. One is the sane way to go, that is until one person wants Cold Fusion installed, but that conflicts with the custom mods of customer 17, and moron 32 has a script that takes them all down every Thursday morning at 3:28am. That triggers a big headache for tech support as they get hit with 50 calls when there should be one.

Virtualisation fixes this by giving each user what appears to be their own computer. For all they know they are on a box to themselves, no muss, no fuss. If they want plain vanilla SuSE, Redhat with custom mods, or a Cold Fusion setup that only they understand, no problem. That script that crashes the machine? It crashes an instance, and with any luck, it will be reloaded before the person even notices the server went down, even if they are up at 3:28am. No one else on the box even notices.

But not all is well in virtualisation land. The most obvious problem is that 50 copies of an OS on one computer take up more resources and lead to a more expensive server. That is true, and it is hard to get around under any circumstances: more things loaded take more memory.

The real killer is the overhead. There are several techniques for virtualisation, but they all come with a performance hit. This number varies wildly with CPU, OS, workload and number of OSes you are running, and I do mean wildly. Estimates I hear run from 10% CPU time to over 40%, so it really is a 'depends' situation. If you are near the 40% mark, you are probably second guessing the sanity of using a VM in the first place.
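
A quick back-of-the-envelope calculation shows why the high end of that range hurts: whatever the CPU burns on the virtualisation layer is not available for real work, so you need proportionally more hardware to serve the same load. The figures below are simply the two ends of the range quoted above.

    # Extra hardware needed to cover the virtualisation overhead: with a
    # fraction x of the CPU lost to the VM layer, only (1 - x) is left for
    # real work, so the same job needs 1 / (1 - x) times the machine.
    for overhead in (0.10, 0.40):
        extra = 1.0 / (1.0 - overhead)
        print("%d%% overhead -> %.2fx the CPU for the same work"
              % (overhead * 100, extra))
    # 10% overhead -> 1.11x the CPU for the same work
    # 40% overhead -> 1.67x the CPU for the same work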

The idea behind VT is to lower the cost of doing this virtualisation while possibly adding a few bells and whistles to the whole concept. Before we dig into how this works, it helps if you know a little more about how virtualisation accomplishes the seemingly magic task of having multiple OSes running on one CPU.

There are three main types of virtualisation: Paravirtualisation, Binary Translation and Emulation. The one you may be familiar with is emulation: you can have a Super Nintendo emulator running in a window on XP, a Playstation emulator in another, and a virtual Atari 2600 in a third. This can be considered the most basic form of virtualisation; as far as any game running is concerned, it is running on the original hardware. Emulation is really expensive in terms of CPU overhead: if you have to fake every bit of the hardware, it can take a lot of time and headaches. You simply have to jump through a lot of hoops, and do it perfectly.
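
At its heart an emulator is just a loop that reads the guest's instructions one by one and fakes their effect in software. The toy machine below is entirely invented, with nothing to do with any real console, but it shows why every emulated instruction costs many real ones.

    # Minimal fetch-decode-execute loop for an invented toy machine. Every
    # guest instruction turns into a comparison, a Python branch and some
    # bookkeeping, which is why full emulation is so much slower than
    # running on the original hardware.
    def run(program):
        regs = {"a": 0, "b": 0}
        pc = 0
        while pc < len(program):
            op, *args = program[pc]          # fetch and decode
            if op == "load":                 # load an immediate into a register
                regs[args[0]] = args[1]
            elif op == "add":                # add one register into another
                regs[args[0]] += regs[args[1]]
            elif op == "print":              # fake an output device
                print(regs[args[0]])
            pc += 1                          # emulate the program counter
        return regs

    run([("load", "a", 2), ("load", "b", 3), ("add", "a", "b"), ("print", "a")])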

The other end of the spectrum is the method currently in vogue, and endorsed by everyone under the sun, Sun included: Paravirtualisation (PV). PV is a hack, somewhat literally: it makes the hosted OSes aware that they are in a virtualised environment, and modifies them so they will play nice. The OSes need to be tweaked for this method, and there has to be a back and forth between the OS writers and the virtualisation people. In this regard, it isn't so much complete virtualisation as a cooperative relationship.
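
The shape of that cooperation is that the guest kernel is edited so that, instead of touching privileged hardware directly, it asks the hypervisor to do the work for it. The 'hypercall' below is made up purely to illustrate the idea and is not any real Xen interface.

    # Sketch of the paravirtualisation idea: the guest kernel is modified so
    # that privileged operations become explicit calls into the hypervisor
    # instead of raw hardware accesses. All names here are invented.
    class Hypervisor:
        def hypercall_set_page_table(self, guest_id, table):
            # The hypervisor checks and applies the request on the guest's
            # behalf, so the guest never needs real privileged access.
            print("hypervisor: installing page table for guest", guest_id)

    class ParavirtualisedGuestKernel:
        def __init__(self, guest_id, hypervisor):
            self.guest_id = guest_id
            self.hv = hypervisor

        def switch_address_space(self, table):
            # An unmodified kernel would write the hardware register itself
            # here; the paravirtualised kernel has been patched to ask instead.
            self.hv.hypercall_set_page_table(self.guest_id, table)

    guest = ParavirtualisedGuestKernel(7, Hypervisor())
    guest.switch_address_space({"kernel": "mapped", "user": "mapped"})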

PV works very well for open source OSes where you can tweak what you want in order to get them to play nice. Linux, xBSD and others are perfect PV candidates; Windows is not. This probably explains why RedHat and Novell were all touting Xen last week and MS was not on the list of cheerleaders.

The middle ground is probably the best route in terms of tradeoffs: Binary Translation (BT). What this does is look at what the guest OS is trying to do and change it on the fly. If the OS tries to execute instruction XYZ, and XYZ will cause problems for the virtualisation engine, it will change XYZ to something more palatable and fake the results of what XYZ should have returned. This is tricky work, and it can consume a lot of CPU time, both for the monitoring and for the fancy footwork required to have it all not blow up. Replacing one instruction with dozens of others is not a way to make things run faster.
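
A crude way to picture binary translation: scan a block of the guest's code before it runs and, wherever a troublesome instruction appears, swap it for something that fakes the same result. The instruction names below are invented; the point is the substitution step, and the way one problem instruction can balloon into several replacements.

    # Toy binary translation pass: walk a block of guest 'instructions' and
    # replace the ones that would upset the virtualisation engine with safe
    # substitutes that fake the same result. Instruction names are made up.
    UNSAFE = {
        # one problem instruction can become several safe ones, which is
        # part of why BT eats CPU time
        "read_privileged_flags": ["call_vmm_get_flags", "mask_guest_visible_bits"],
        "write_io_port":         ["call_vmm_trap_io"],
    }

    def translate(block):
        out = []
        for instr in block:
            if instr in UNSAFE:
                out.extend(UNSAFE[instr])    # substitute the faked version
            else:
                out.append(instr)            # safe instructions pass straight through
        return out

    print(translate(["add", "read_privileged_flags", "store", "write_io_port"]))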

When you add in things like self modifying code, you get headaches that mere mortals should not have. None of the problems have a simple solution, and all involve tradeoffs to one extent or another. Very few problems in this area are solved, most are just worked around with the least amount of pain possible. µ

Parts two to four to follow

 
