For the last seven years, I have been developing applications for Virtual Machines (VMs) as part of the Leostream Corporation. This market has grown from essentially an academic research project into a multi-billion-dollar-a-year ecosystem with many players. The technologies involved in virtualization are well covered in Wikipedia and elsewhere, but they are summarized here and links to more detailed information are provided. Much of this has been discussed in the standard industry media outlets for some time, but I’d like to offer my perspective on the industry, its players, and where things are likely to go.

The Technology

Below, I discuss three broad categories of virtualization that create isolated environments on a single physical system. The computer that runs this kind of technology has physical hardware and is typically called the host system. The isolated environments hosted on such a system are called guests and run entirely as software. Although the term virtualization has recently come to mean just paravirtualization, all of the technologies discussed here can be used, to varying degrees, for the applications listed later in this article.

Any hardware can be emulated in software. Emulators have been critically important to firmware developers, who need to develop and test drivers for hardware that isn’t yet available to them. Developers of console games and mobile phone applications frequently use emulators to shorten development cycles.

Although emulation allows any machine architecture to be hosted on the user’s workstation, the emulated guest typically performs poorly: there is a lot of overhead in an application pretending to be a different machine. For many data center applications of virtualization, emulated x86 machines do not perform well enough. For development purposes, however, emulation is often adequate.

Another common approach to creating an isolated operating system (OS) environment is partitioning, which takes many forms. The designers of Unix, who had worked on the Multics time-sharing system, set out to build a simpler multiuser system. In a standard Unix system, many users may be logged in at once, but each has their own home directory and process space. However, users can see the files and processes of other users on the system, which can lead to some security concerns.

To improve on the initial security model of Unix, chroot was developed to more fully isolate users from each other. An extreme extension of this idea can be found in User-Mode Linux (UML), in which each user appears to have a dedicated copy of the underlying OS, complete with a unique instance of the filesystem. With Virtuozzo Containers or Solaris Containers, Windows or Solaris systems can be partitioned in a similar way.
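
To make the idea concrete, here is a minimal sketch of chroot-style isolation using Python’s standard os module. The jail path is an example of my own choosing, and the script must run as root against a directory prepared with its own minimal filesystem tree.

```python
import os

# Minimal sketch of chroot-style isolation (must run as root).
# "/srv/jail" is an example path to a directory prepared with its own
# minimal filesystem tree (bin/, lib/, etc/ and so on).
os.chroot("/srv/jail")
os.chdir("/")  # make sure the working directory is inside the jail

# From this point on, "/" refers to /srv/jail; files outside it are
# invisible to this process and its children.
print(os.listdir("/"))
```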

From the hardware/OS point of view, partitioning is very cheap to implement. Many “guest” containers can be hosted with decent performance even on commodity hardware. The drawback is that the guests are essentially copies of the underlying OS. For instance, a Virtuozzo server running on Windows 2003 can only host containers running the same version of Windows 2003 as the host.

For performance and flexibility, paravirtualization is the current state of the art in hosting technology. This technique uses the bare-metal components of the host machine as much as possible to do work for the guests. Typically, guests see a standard set of virtual hardware, virtual storage, and BIOS irrespective of the actual hardware of the host. Virtualized guests can run nearly any OS that can run on the host system’s architecture.

Paravirtualization is a flexible compromise between emulation, which allows a guest to run any OS for any architecture, and containers, which allow a very large number of guests that must run a version of the host OS. Early paravirtualization software ran mostly as a user-mode application hosted in a well-known OS like Windows or Linux. However, hypervisors have begun to replace these older applications. Hypervisors are tiny OS kernels optimized to run as many VMs as possible, and they are controlled either through a special console VM or through a client application that runs on the administrator’s workstation.
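
As an example of that client-application style of control, here is a rough sketch using the libvirt Python bindings (an assumption on my part that they are installed; any libvirt-managed hypervisor such as KVM or Xen exposes the same API) to enumerate the guests on a host:

```python
import libvirt  # libvirt Python bindings, assumed to be installed

# Connect to a local QEMU/KVM hypervisor; a Xen host would use a URI
# like "xen:///" but the rest of the API is the same.
conn = libvirt.open("qemu:///system")

for dom_id in conn.listDomainsID():        # currently running guests
    dom = conn.lookupByID(dom_id)
    print(dom.name(), dom.info())          # state, memory, vCPU count, ...

for name in conn.listDefinedDomains():     # defined but powered-off guests
    print(name, "(shut off)")

conn.close()
```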

Business Drivers of Virtualization

To understand the business case for virtualization technology, a brief digression into recent Information Technology (IT) history is productive.

At the end of the twentieth century, many businesses had accumulated a not-so-small army of workstations, dedicated servers, minicomputers, and mainframes. While there were many business-critical applications that demanded the entire computational and I/O horsepower of the machines on which they were hosted, there was also a growing class of under-utilized machines. These machines often served lightly used but important applications that could not easily be migrated to new hosts or share a single host with other applications.

The proliferation of these one-off machines burdened IT budgets. There are several costs associated with maintaining a machine in a corporate datacenter, including power, air conditioning, monitoring, hardware replacement, and rack space. These are recurring costs that quickly exceed the original price of the hardware. Despite this, server applications (particularly web servers and databases) remained in demand throughout most organizations through the late 1990s and 2000s.

The utility of ubiquitous computing in an organization had very measurable results in productivity and profit, but the maintenance and security of all those physical machines introduced real concerns for IT and management. Fortunately, three developments came along to ameliorate these issues: fast processors, fast and ubiquitous networking, and VMware.

VMware has its roots in a research project at Stanford. The idea was to bring paravirtualization from the IBM mainframe world to the more limited x86 platform. The i386 could easily emulate 8086 machines in hardware, but could not easily virtualize its own more sophisticated architecture. VMware created a product that could run Pentium-class VMs on Pentium hosts with better performance than emulation.

Moore’s law fits virtualization very well. By the mid-2000s, 32-bit Pentium 4s and similar AMD chips could host, given enough RAM, 4 to 10 VMs. 64-bit CPUs with multiple cores and hardware support for virtualization improved a host’s ability to run dozens of guests.

The Applications of Virtualization

There are three general uses of virtualization technology that achieve real business goals today: server consolidation, development, and hosted desktops. While any of the virtualization methods previously outlined will work for these applications, some technologies suit some purposes better than others. Let’s first discuss these applications and the business drivers behind them.

Server consolidation is the process of replacing dedicated physical machines with guests running on a few well-provisioned hosts. This is often what brings virtualization into IT departments. By consolidating hardware, maintenance and power savings become immediately clear. Sometimes end users see the benefits of consolidation as well, when their application moves from an older host to a VM running on much newer and faster hardware.

Consolidation is most often done with paravirtualization, which usually requires very little adjustment of the target legacy application. The process of migrating a physical host to a virtual machine can be done either manually, through backups, or automatically with P-to-V software, like the kind Leostream used to sell. These days, there are a number of commodity tools available to handle P2V conversions.
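
As a rough illustration of the manual route, assuming you have already captured a raw disk image of the physical machine (for example with dd), a tool such as qemu-img can convert it into a format a virtualization host understands. The file names below are placeholders, not part of any particular product’s workflow.

```python
import subprocess

# Rough sketch of one manual P-to-V step: converting a raw disk image of
# the physical machine into a virtual disk format. "disk.raw" and
# "disk.vmdk" are placeholder names; qcow2 or another format works the
# same way by changing the -O argument.
subprocess.run(
    ["qemu-img", "convert", "-f", "raw", "-O", "vmdk", "disk.raw", "disk.vmdk"],
    check=True,
)
```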

Application development and quality assurance benefit enormously from any kind of virtualization technology. Developers and QA engineers often need access to a number of machines with specific OS versions and patch levels. Because this use case generates a lot of sparsely used machines quickly, the need to manage libraries of guests across dozens of hosts led Leostream to develop the Virtual Machine Controller in 2003.

This group of users tends to be inside the corporate LAN using high-speed network connections to get console access to their VMs. This profile is very different from the folks in the next group.

Hosted desktops are an attempt by organizations to replace standard workstations with thin-client terminals or terminal emulators. By keeping desktops in the data center, sensitive information stays within the corporate firewall and support costs for workstations drop significantly. Users also benefit: they are no longer tied to one workstation and can get to their virtual desktops using a remote protocol such as RDP, VNC, or ICA.

While remote access to machines isn’t that revolutionary for most IT workers, there are many challenges to moving the general population to remote desktops. The most pressing is matching users with target desktops. While a 1-to-1 mapping of users to machines isn’t too hard to manage, VMs allow much richer schemes with complex policies. As the task of managing users and machines grows, the need for special connection broker software becomes critical.
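
To give a flavor of the problem a broker solves, here is a deliberately simplified, entirely hypothetical sketch of a pool-plus-policy assignment; real brokers layer on authentication, session tracking, failover, and much more.

```python
from typing import Optional

# Toy illustration of connection-broker policy: map a user to a desktop
# based on role, pool availability, and an optional sticky assignment.
# All names here are invented for the example.
POOLS = {
    "engineering": ["eng-vm-01", "eng-vm-02", "eng-vm-03"],
    "finance": ["fin-vm-01", "fin-vm-02"],
}
STICKY = {"alice": "eng-vm-02"}          # users pinned to a specific desktop
IN_USE = {"eng-vm-01", "eng-vm-02"}      # desktops with active sessions

def assign_desktop(user: str, role: str) -> Optional[str]:
    if user in STICKY:                   # 1. honor an existing assignment
        return STICKY[user]
    for vm in POOLS.get(role, []):       # 2. otherwise pick a free VM from the role's pool
        if vm not in IN_USE:
            IN_USE.add(vm)
            return vm
    return None                          # 3. no capacity: a real broker would queue or spill over

print(assign_desktop("alice", "engineering"))  # -> eng-vm-02 (sticky)
print(assign_desktop("bob", "engineering"))    # -> eng-vm-03 (first free VM)
```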

The basic components of a hosted desktop deployment are a client, some kind of broker, and a target desktop. This simple picture hides a wealth of complexity found in large deployments, in which SSL VPNs, corporate network policies, staffing policies, and departmental hardware and software requirements often create a tangled skein indeed.

Because VMs are often used as the target desktops, VMware coined the term Virtual Desktop Infrastructure (VDI) to describe all the pieces involved in a hosted desktop deployment. However, many companies are using a hybrid approach to hosting desktops that mixes physical machines, VMs, and Citrix.

Many companies are currently pursuing the hosted desktop market, which some analysts have predicted will be far larger than the server consolidation market. If most large companies decide to replace their employees’ workstations with thin clients, the market will be very large.

The Virtualization Layer Players

Below are the four most interesting players offering virtualization layers. This is, of course, an arbitrary list, but a useful one for someone new to the field. There are many, many more players in the virtualization space with more specialized offerings.

It would be hard not to mention VMware first in any discussion of virtualization. Although they did not invent the idea (IBM did), they have been the leader in x86 virtualization since 1998. VMware now offers a number of free-to-use virtualization products, including VMware Server and ESX 3i, both of which I use at home.

In 2003, Microsoft bought Connectix, makers of Virtual PC, and entered the virtualization market. After some early missteps with Virtual Server, Microsoft has bundled Windows 2008 and Windows 7 with a hypervisor that should offer VMware some competition at the commodity end of the market. It is my impression that Microsoft doesn’t quite know what they expect this technology to do for them. It’s not a simple mass-market product like MS Office; right now, virtualization is an enterprise IT thing. Perhaps Microsoft is setting the stage for ubiquitous client-side hypervisors, but it’s not clear to me how that will benefit them. Virtual Computer, on the other hand, should do quite well in this kind of world.

In 2007, Citrix, who brought remote desktops to Windows in the 1990s, bought XenSource, the commercial arm of the open source Xen virtualization layer. Xen has been bundled into XenApp, which used to be called Presentation Server. Citrix appears to be using Xen to capture the server consolidation market, but is also hedging its bets in hosted desktops with Xen. Perhaps 2010 will be a crossover year for them.

Parallels is a youngish company that offers a compelling combination of paravirtualization and containers (through their merger with SWSoft). Unfortunately, I don’t believe their marketing is cutting through the noise of the bigger elephants in the virtual room. However, they continue to do decent business. Parallels Containers (née Virtuozzo) is very popular with ISPs, who need to cut costs at every opportunity.

There is another group of virtualization vendors that deserve mention. This group includes some well-known names as well as some that you may not have heard of. In your own projects, you may find the wares offered by these companies to be very compelling.

IBM is the original inventor and patent holder of virtualization on their mainframe equipment. They offer a broad spectrum of products and services around virtualization, including the HC12 Teradici-enabled blades that offer the very high performance PC-over-IP remote access protocol. If I could afford it, I would buy four of these blades and client pucks for my home.

Sun (or is it Oracle now?) offers the entire VDI stack, just not as a boxed product. Desktops can be hosted on Sun SANs, served through Solaris Containers or VirtualBox, brokered through their connection broker, and accessed using Sun Ray thin clients. Since I worked a lot with Sun Ray thin clients, I’m a little biased towards them. Their ALP protocol optimizes the desktop session over high-latency networks. If your users are primarily outside the corporate LAN, this is pretty much the only thin client to use.

Virtual Iron offers an optimized Xen hypervisor with a proprietary Java management interface. They have succeeded in targeting the server consolidation market. Their product is a snap to install and scales well horizontally.

Virtualization has been a core part of Linux for a while now. Red Hat and SUSE both ship with the Xen/KVM/QEMU virtualization suite and tools to easily create Windows VMs on host machines with VT-enabled, 64-bit processors. I have been disappointed that neither company has gone after the hosted desktop market with the zeal I’d expect, but it is still early in that market.
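
As a rough sketch of what “easily create Windows VMs” looks like in practice on such a host, the virt-install tool that ships with these distributions can be driven from a script. The guest name, sizes, ISO path, and OS variant below are example values of my own, not a recipe from either vendor.

```python
import subprocess

# Sketch of creating a Windows guest with virt-install on a KVM-enabled
# Linux host. All values below (name, memory, disk size, ISO path,
# OS variant) are examples and would be adjusted for a real deployment.
subprocess.run(
    [
        "virt-install",
        "--name", "win-guest",
        "--ram", "2048",
        "--vcpus", "2",
        "--disk", "size=20",
        "--cdrom", "/var/lib/libvirt/images/windows.iso",
        "--os-variant", "win2k3",
    ],
    check=True,
)
```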

Virtual Computer is leading the charge in client-side hypervisors. The idea is a bit like VMware’s ACE in that the user has a VM on his laptop that can be managed by corporate IT. The critical difference is that the laptop is running a hypervisor and the user experiences the VM as if it were the console. Potentially, you could swap between several VM images. The advantage of this abstraction is that the corporate image can be locked down tightly while allowing a more open OS image to be used at the discretion of the end user.

The Future

The immediate future of virtualization is clear: it will become completely ubiquitous. End users will become comfortable connecting to remote machines, and the methods for making those connections will become faster and more transparent. Service providers like Comcast and Verizon will rent VMs to their Internet or cable TV customers, who will use their digital receivers as thin clients. The technology to do this exists today.

Another consequence of pervasive VMs will be the rise of virtual appliances. Already, there is a small cadre of networking appliances out there. I see virtual appliances becoming the dominant paradigm for shipping certain kinds of applications. Think of it as a kind of Software as a Service to go. I have first-hand experience developing for virtual appliances, and the benefits to the publisher are many. What’s lacking are installation tools that make virtual appliances install like traditional applications. But I’m sure someone is tackling that problem now.

Virtualization provokes a disturbing question for OS vendors: who controls the hardware? Traditionally, the OS made the hardware accessible to applications. Hypervisors force the OS to a higher level in the application stack. Will hypervisors soon appear only in BIOS or hardware? If so, what does that mean for OS vendors? Nothing good.

Makers of peripheral hardware may also lament the rise of VMs. You cannot install a new video card in a virtual machine, for instance. I suppose there may arise an open standard that allows VMs to more fully access physical hardware, but none now exists.

Software vendors are the clear beneficiaries of virtualization. There’s a whole new class of problems to solve.