
A look at approaches to virtualization

In this article, Anil Desai distinguishes among virtualization at the hardware, server and application levels and provides guidelines for evaluating which works best.

Virtualization is a general term that can apply to many different technologies. For example, storage systems, databases and networks can all be virtualized in one way or another. Much of the current buzz focuses on "server virtualization," which is the ability to allow multiple independent operating systems to run on the same hardware at the same time. Products from Microsoft and VMware lead in this area.

Although server virtualization can provide tremendous benefits, it's not the only option out there for using virtualization. In this article, I'll provide some details on various approaches to virtualization, along with the pros and cons of each. The goal is to determine the best method for a particular workload.

An overview of virtualization approaches

Figure 1 provides a high-level overview of the areas of a standard server stack that can be virtualized. Moving up from the bottom is the hardware layer, followed by the operating system and finally the applications.

Figure 1: The various virtualization layers.

Before we get into further technical details, let's quickly review the key goals of virtualization. The first is to ensure independence and isolation between the applications and operating systems that run on a particular piece of hardware. The second is to provide access to as much of the underlying hardware system as possible. The third is to do all of this while minimizing performance overhead. That's no small set of goals, but it can be done (and in more ways than one). Let's take a look at how.

Hardware-level virtualization and hypervisors

We'll start at the bottom of the stack, at the hardware level. Theoretically, virtualization platforms that run directly on the base hardware should provide the best performance by minimizing overhead. An example is VMware's ESX Server, which installs directly on a supported hardware platform and includes a minimal operating system. Administration is performed through a Web-based application that can be accessed remotely from a browser.

A hypervisor is a thin layer that runs directly between operating systems and the hardware itself. Again, the goal here is to avoid the overhead related to having a "host" operating system. Microsoft and other vendors will be moving to a hypervisor-based model in future versions of their virtualization platforms.

Although the low-level approach might seem ideal, it has some drawbacks. First and foremost is device compatibility: for the platform to work at all, it must support all of the devices connected to the computer. Currently, products such as ESX Server run only on approved hardware platforms. Although many popular server platforms are supported, hardware compatibility is clearly narrower than with other solutions.
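As a practical aside, one low-level compatibility check is whether the CPU itself advertises hardware virtualization extensions. The sketch below is my own illustration, not from the article: it parses Linux /proc/cpuinfo-style text for the vmx (Intel VT-x) and svm (AMD-V) flags.

```python
def supports_hw_virtualization(cpuinfo_text):
    """Return True if the CPU flags advertise hardware
    virtualization extensions (Intel VT-x or AMD-V)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

# On a real Linux host you would feed in /proc/cpuinfo:
#   with open("/proc/cpuinfo") as f:
#       print(supports_hw_virtualization(f.read()))

sample = "processor : 0\nflags : fpu vmx sse2"
print(supports_hw_virtualization(sample))  # True
```

Note that CPU support is only one piece of the puzzle; the approved-hardware lists mentioned above also cover storage and network devices.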

Another issue is manageability. The dedicated virtualization layers must provide some methods for managing virtualization services. There are various approaches, including operating system "hooks" and Web-based administration, but they tend to be more complicated than in other virtualization options.

Server-level virtualization

The best known and most readily useful virtualization products are those that operate at the server level. VMware GSX Server and Microsoft Virtual Server 2005 are good examples. These products are installed within a host operating system (such as a supported Linux distribution or the Windows Server platform). In this approach, virtual machines run within a service or application that then communicates with hardware by using the host operating system's device drivers.

Figure 2 shows an example of server virtualization using Microsoft Virtual Server 2005.

Figure 2: An example of a server-level virtualization stack

Server-level virtualization brings ease of administration (since standard management features of the host OS can be used), increased hardware compatibility (through the use of host OS device drivers) and integration with directory services and network security. Whether you're running on a desktop or a server OS, you can be up and running with these platforms within a matter of minutes.

One drawback is that the host OS adds overhead of its own. The memory, CPU, disk, network and other resources consumed by the host must be subtracted from what would otherwise be available to VMs. Generally, the host OS also requires its own license. Finally, server-level virtualization solutions are often not as efficient as hardware-level virtualization platforms.
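That subtraction can be sketched as a quick capacity estimate. The function and figures below are illustrative assumptions of my own, not vendor sizing guidance:

```python
def max_vm_count(total_ram_gb, host_overhead_gb, ram_per_vm_gb):
    """How many identically sized VMs fit once the host OS
    reservation is subtracted from physical memory."""
    usable = total_ram_gb - host_overhead_gb
    if usable <= 0 or ram_per_vm_gb <= 0:
        return 0
    return int(usable // ram_per_vm_gb)

# e.g. a 16 GB host that reserves 2 GB for the host OS,
# running 1.5 GB guests:
print(max_vm_count(16, 2, 1.5))  # 9
```

In practice you would run the same arithmetic for CPU, disk and network headroom as well, and leave a safety margin rather than packing to the limit.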

Application-level virtualization

In some cases, running multiple independent operating systems is overkill. If you only want to create isolated environments that allow multiple users to concurrently run instances of a few applications, there's no need to create a separate VM for each concurrent user. That's where application-level virtualization comes in.

Application-level virtualization products run on top of a host operating system and place standard applications (such as those included with Microsoft Office) in isolated environments. Each user that accesses the computer gets what appears to be his or her own unique installation of the products. Behind the scenes, file system modifications, registry settings and other details are performed in isolated sandbox environments and appear to be independent for each user. Softricity and SWSoft are two vendors that provide application-level virtualization solutions.

The main benefits of this approach are greatly reduced overhead (since only one full operating system is required) and improved scalability (many users can run applications concurrently on the same server). Generally, only one OS license is required (for the host OS). The drawbacks are that only software settings will be independent. If a user wants to change hardware settings (such as memory or network details) or operating system versions (through patches or updates), those changes will be made for all users on the system.
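The per-user sandbox described above behaves like a copy-on-write overlay: writes land in a private layer while reads fall through to the shared installation. The following toy Python sketch is my own model of that idea, not how Softricity or SWSoft actually store state:

```python
from collections import ChainMap

# Shared, read-only application defaults (the "real" install).
base_settings = {"version": "11.0", "theme": "default", "autosave": True}

def user_view(overlay):
    """Each user gets a private overlay dict; writes land in the
    overlay, reads fall through to the shared base settings."""
    return ChainMap(overlay, base_settings)

alice, bob = {}, {}
user_view(alice)["theme"] = "dark"   # only Alice sees this change

print(user_view(alice)["theme"])  # dark
print(user_view(bob)["theme"])    # default
print(base_settings["theme"])     # default (base is untouched)
```

The same limitation the text mentions shows up here: anything outside the overlay (the "hardware" and OS level, in this analogy the shared base) is common to every user.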

Thin clients and remote application execution

The idea of thin clients has been around since the days of mainframes (when they were less affectionately known as "dumb terminals"). Users connect remotely to a centralized server using minimal software and hardware on the client side. All applications execute on the server, and only keyboard, video and mouse I/O travel over the wire. Citrix products and Microsoft Windows Server 2003 Terminal Services are examples of this approach.

Selecting the best approach

Now that you know your options, how do you decide which is the best one for a particular virtualization workload?

Table 1 provides some examples of typical workloads. In general, as you move from hardware- to server- to application-level virtualization, you gain scalability at the cost of overall independence. The "best" solution will be based on the specific workload and other related details. The bottom line is that you do have options when deciding to go with virtualization, so be sure to consider them all.


Workload: Data center server consolidation
Recommended approach: Hardware-level or server-level virtualization
· Performance is a key factor.
· Server applications are typically complex.

Workload: Software development and testing environments
Recommended approach: Server-level virtualization
· Manageability is a key requirement.
· Users must be able to change hardware settings and OS levels.

Workload: Sharing end-user productivity applications
Recommended approach: Application-level virtualization or remote application execution
· Scalability is important.
· Applications are less complex.

Table 1: Comparing virtualization approaches for various workload types
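The trade-offs in Table 1 can be distilled into a rough decision rule. The function below is my own simplification of the guidance above, not part of the original table:

```python
def recommend(needs_hw_or_os_changes, performance_critical, many_concurrent_users):
    """Rough decision rule distilled from the workload table:
    scalability favors application-level approaches, independence
    favors full VMs, raw performance favors the hardware level."""
    if many_concurrent_users and not needs_hw_or_os_changes:
        return "application-level or remote execution"
    if needs_hw_or_os_changes:
        return "server-level"
    if performance_critical:
        return "hardware-level or server-level"
    return "server-level"

print(recommend(False, True, False))  # hardware-level or server-level
print(recommend(True, False, False))  # server-level
print(recommend(False, False, True))  # application-level or remote execution
```

Treat this as a starting point for discussion; real decisions also weigh licensing, staff skills and vendor support.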
