The answer to 'Why the desktop first?', other than the fact that it was ready, is simply to let people get their feet wet with the new tech. If you are writing a VMM, not an easy task in any event, it is much easier to debug if you have something to run your code on. VT is scheduled for desktop introduction this year, and for servers presumably in early 2006. That gives VMM hackers six months to tweak things on production chips rather than on finicky or buggy development platforms. Think training wheels more than world-changing killer apps for now.
Looking out into the future, there are some interesting scenarios I can envision low-cost virtualization being used in. The traditional uses described in part one of this series are obviously not going to go away; in fact, they will become a lot more prevalent with lower overhead. The server side does not look to have many major upheavals with the introduction of VT.
The user side of the world may have some changes, but they are far out. The first class of things revolves around corporate user and machine management. If your VMM is part of your management package, you can load, unload, and tweak things right under the nose of the user.
If they are using resources in a non-approved way, you can throttle them down, load or unload things on their HD, and even potentially patch programs on the fly. If they manage to muck the OS up to a degree that is all too common in modern corporate life, you simply blow the OS instance away and load up another snapshot.
As a management tool, it can be everything a BOFH dreams of: unobtrusive unless you want it to be, undetectable, and impenetrable by clueless users. Spyware? Viruses? No problem, they can go away with the click of a mouse on a management console half a continent away.
This will take some time to trickle down to anything but the biggest of companies, but it will happen. The fact that it will soon be included as standard in all Intel chips guarantees that it will be used to some degree or other everywhere. The only question in my mind is how long it will take.
Further out in the nebulous timeline of IT progress come the more interesting uses of virtualization. Instead of having your OS be completely virtualized, imagine a partially virtualized OS. Every program could run in its own virtual machine, with messages passed back and forth in shared memory. It would be like hardware-enforced threads: you spawn a new VM and run the program in it.
Beyond the stability gains, haywire programs cannot get out of the VM and step on critical processes, which is a huge security benefit. The best use is one that will probably be a moot point by the time it happens. Three years ago, MS promised us that in two years or so they would have security under complete control; it was, after all, a Bill Gates proclamation.
On the off chance that MS was not 100% secure by this time a year ago, VMs can help. One of the ideas tossed out by the Intel engineers was running IE in a VM. When you are done browsing, you shut down the VM, and all the malware and crud that comes along with running that browser goes off into the ether with nary a poof.
If you set things up right, so that specific information is pulled from the browser before it shuts down rather than the browser writing all over the OS, it would be very hard for a virus to spread. When you run IE the next time, it loads a clean image and has information like bookmarks and cookies pushed to it. While it is not an incorruptible paradigm, it would certainly be much harder to circumvent the controls that VT could put into place. Luckily, this will be a moot point by then; MS promised.
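The pull-then-destroy pattern above can be sketched in a few lines. Here a throwaway directory stands in for the disposable VM image: the session scribbles whatever it likes inside it, but only a whitelist of state survives teardown. All file names and the `KEEP` set are hypothetical, chosen just to illustrate the flow.

```python
# Sketch of the "pull whitelisted state, then destroy everything" idea.
# A temp directory stands in for the disposable VM image; file names
# are illustrative.
import shutil
import tempfile
from pathlib import Path

KEEP = {"bookmarks.txt", "cookies.txt"}  # state worth preserving

def run_session(persistent: Path) -> None:
    scratch = Path(tempfile.mkdtemp(prefix="vm-session-"))
    try:
        # Stand-in for the browser running inside the VM: it writes
        # both wanted state and unwanted crud into its scratch area.
        (scratch / "bookmarks.txt").write_text("https://example.com\n")
        (scratch / "malware.exe").write_text("junk")
        # Pull only the whitelisted state out before teardown.
        for name in KEEP:
            src = scratch / name
            if src.exists():
                shutil.copy(src, persistent / name)
    finally:
        shutil.rmtree(scratch)  # the image goes off into the ether

if __name__ == "__main__":
    home = Path(tempfile.mkdtemp(prefix="home-"))
    run_session(home)
    print(sorted(p.name for p in home.iterdir()))  # ['bookmarks.txt']
```

Nothing the session wrote outside the whitelist ever reaches the persistent side, which is why malware that only corrupts the disposable image gains nothing.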
There is one other synergy that was not all that well publicized. Another of the so-called 'T' technologies, LaGrande Technology, or LT, will work well with VT. LT is hardware security and encryption, more or less security on a chip according to its proponents. With hardware-virtualized OSes hosted on a machine, you have the perfect platform for the 'secure side' and 'insecure side' demo from a few years ago.
Each of these so-called boxes could be a hosted OS or a hosted program; with hardware-enforced partitions, you could theoretically do all the things promised in a LaGrande system at very low cost. With LT bringing an encryption engine into the mix, the whole thing could be accomplished with very low overhead.
So, when all is said and done, what do we end up with? We have a set of technologies that vastly speeds up virtualization through the addition of a new mode of CPU operation. This overarching mode can be thought of as a new, more powerful ring, or as a way of compartmentalizing the entire 'old way'. However you care to visualize it, just realize it is more a superset of the x86 instruction set (or of IPF, for VT-i) than anything else.
Out of the gate there should be little that uses it; VT won't hit with a bang, more of a quiet whimper. Server software will catch up to Vanderpool in short order; it is simply too big a gain for it not to be jumped on. On the desktop, there will be a much bigger lag, not because of the hardware, but because there currently is little use for it. Apps will catch up to it, that's for sure, but it will take longer for interesting things to come out of the development labs.
In the meantime, everyone will be jumping on board. OS writers are working feverishly to make their OSes more paravirtualization-friendly, app developers are doing the same, and other hardware makers are announcing similar or complementary technologies. IBM has had similar concepts in its Power line for a while, but not on the desktop PPC line. AMD has announced the Pacifica name, but talked little about specifics. We expect this technology to come out in the not too distant future.
The next year will see everyone catching up with Intel on Vanderpool. Pretty soon after that, VT will be just another thing found in all CPUs, and the OS will no longer be the king of the machine. Virtualization will run faster and smoother, servers and desktops will have a bunch of new tricks they can pull off, and we will all move on to looking at the next new paradigm-enhancing technology.
I would like to thank Rich Uhlig, Principal Engineer, Corporate Technology Group; Dion Rodgers, Senior Principal Engineer, Digital Enterprise Group; Patrick Bohart, Vanderpool Technology Marketing Manager, Digital Enterprise Group; and Christine Dotts of Intel for their help with this article. Because they did help.