OpenStack’s Role in the Life of a Hypervisor

Whether deployed as a public cloud service or within private data centers, OpenStack relies on an underlying virtualization technology to deliver enterprise-level uptime, resource efficiency, and scalability. The most common OpenStack hypervisor, KVM, is a critical, though replaceable, part of the OpenStack framework.

Read on to find out more about hypervisors, how they work with OpenStack, and how to choose the right one for your business.

Overview of Hypervisors

A hypervisor is the software component that enables virtual machines (VMs) to run on host machines by allowing multiple operating systems (OS) to share the same hardware. Left to itself, an operating system will typically consume as much of the host machine's hardware (memory, processors) as is available. Hypervisors prevent this by controlling and allocating a share of the hardware resources to each individual OS, enabling each one to get what it needs without disrupting the operations of the others.

In essence, hypervisors are programs that help manage a VM’s access to underlying hardware. This means they virtualize hardware resources and help to create, manage and monitor VMs. There are two types of hypervisor.

Type 1 Hypervisor

Type 1 hypervisors are also referred to as “native” or “bare metal” embedded hypervisors. They run directly on the host system’s hardware to manage guest operating systems and control the hardware. Type 1 hypervisors include VMware ESX/ESXi, Xbox One system software, Microsoft Hyper-V, Oracle VM Server for x86, Oracle VM Server for SPARC, XCP-ng, Xen, and AntsleOs.

Type 2 Hypervisor

Also referred to as “hosted” hypervisors, type 2 hypervisors run within the host OS (as normal computer programs do) and enable a guest OS to run as a process on the host OS. They do this by providing virtualization services, such as memory management and I/O device support, thus abstracting a guest OS from its host OS. Type 2 hypervisors include QEMU, VirtualBox, VMware Player, and VMware Workstation.
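In practice the line between the two types can blur: QEMU on its own is a hosted (type 2) emulator, but when the Linux host exposes KVM through the /dev/kvm device, QEMU hands CPU execution to the kernel and performs much like a bare-metal hypervisor. A minimal sketch of that detection, assuming a libvirt-style choice between "kvm" and "qemu" (the function name is hypothetical):

```python
import os

def libvirt_virt_type(kvm_dev="/dev/kvm"):
    """Pick the accelerated KVM backend when the host exposes
    /dev/kvm; otherwise fall back to plain QEMU emulation,
    which behaves like a hosted (type 2) hypervisor."""
    return "kvm" if os.path.exists(kvm_dev) else "qemu"
```

A deployment tool might run a check like this once at install time and write the result into its hypervisor configuration.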

OpenStack Hypervisors

Hypervisors and OpenStack work hand-in-hand to create, manage, and monitor virtual machines. Let’s examine hypervisors supported by OpenStack:

  • XenServer. Xen Cloud Platform (XCP), XenServer, and other XAPI-based Xen variants run Windows and Linux virtual machines. However, the nova-compute service must be installed in a para-virtualized virtual machine.
  • Xen. The Xen Project Hypervisor uses libvirt as a management interface into OpenStack’s nova-compute to run NetBSD, FreeBSD, Windows, and Linux virtual machines.
  • VMware vSphere. vSphere versions 5.1.0 and newer run VMware-based Windows and Linux images through a connected vCenter server.
  • UML (User Mode Linux). This is typically used for development purposes.
  • QEMU. This is the Quick EMUlator, and it’s also used for development purposes.
  • Hyper-V. Microsoft Hyper-V runs nova-compute natively on the Windows virtualization platform. It also runs FreeBSD, Linux, and Windows virtual machines.
  • LXC. These are Linux containers that run Linux-based VMs.
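Which of these hypervisors Nova actually uses is set in its configuration file. As a sketch, a KVM-backed deployment typically selects the libvirt compute driver and a virt_type in nova.conf (the values shown are illustrative of one common setup):

```ini
[DEFAULT]
# Use the libvirt driver, which manages KVM, QEMU, LXC, and Xen.
compute_driver = libvirt.LibvirtDriver

[libvirt]
# One of: kvm, qemu, lxc, xen, parallels
virt_type = kvm
```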

According to a recent OpenStack user survey, the most widely adopted hypervisor in the OpenStack community is KVM. So what is KVM and why is it so popular? Let’s take a look.

All About KVM: OpenStack’s Most Widely Used Hypervisor

KVM (Kernel-based Virtual Machine) is a virtualization module built into the Linux kernel, and it supports VMware, qcow2, and raw image formats. It inherits the virtual disk formats it supports from QEMU and launches virtual machines using a modified QEMU program.
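Those disk formats can be told apart by their leading bytes; for instance, every qcow2 image begins with the magic bytes QFI\xfb followed by a big-endian version number. A minimal sketch of such a probe (detect_image_format is a hypothetical helper; real tools like qemu-img recognize many more formats):

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 file

def detect_image_format(header: bytes) -> str:
    """Guess a disk image's format from its leading bytes.

    qcow2 images start with the magic 'QFI\\xfb' followed by a
    big-endian version; anything else is treated as raw here.
    """
    if header[:4] == QCOW2_MAGIC:
        version = struct.unpack(">I", header[4:8])[0]
        return f"qcow2 v{version}"
    return "raw"
```

For example, a header of `b"QFI\xfb\x00\x00\x00\x03"` would be reported as qcow2 version 3.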

So what are the benefits of KVM? First off, consistency. Because OpenStack itself runs on Linux, it’s logical to use a Linux-based hypervisor.

Additionally, KVM is well-loved by the OpenStack community, so it benefits from many additional features that have been widely tested. That’s probably why nearly 90% of OpenStack deployments use KVM.
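One practical prerequisite worth noting: KVM's acceleration depends on hardware virtualization extensions, which Linux reports as the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo. A hedged sketch of that check (has_hw_virt is a hypothetical helper):

```python
def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any CPU in /proc/cpuinfo-style text advertises
    Intel VT-x (vmx) or AMD-V (svm), which KVM needs to accelerate
    guests; without them, Nova falls back to plain QEMU emulation."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False
```

On a real host this would be fed the contents of /proc/cpuinfo.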

Just because it’s the most popular hypervisor doesn’t mean it’s the right one, especially if you’ve already invested in a different virtual infrastructure.

Why You Should (or Shouldn’t) Use a Hypervisor on OpenStack

While organizations can certainly run their workloads directly on the hypervisor, OpenStack provides many capabilities that hypervisors alone don’t. These include a seamless UI for virtually every function needed to efficiently assemble and deploy VMs. With this UI, users can easily build and monitor a robust, extensive network and enable advanced features like Hadoop cluster support and data analytics capabilities.

The UI also allows easy configuration of standard resources like memory, CPU, and storage. OpenStack also comes with real-time billing support, enabling users to track core, disk, and memory usage, along with other statistics, for every VM created using OpenStack.
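As a purely illustrative sketch of that kind of usage roll-up (the rates, field names, and function are invented for this example, not an OpenStack API):

```python
def usage_cost(vcpu_hours, ram_gb_hours, disk_gb_hours, rates):
    """Toy cost roll-up in the spirit of OpenStack metering:
    multiply each tracked resource's usage-hours by a per-unit
    rate and sum. `rates` is a dict of hypothetical prices."""
    return (vcpu_hours * rates["vcpu"]
            + ram_gb_hours * rates["ram_gb"]
            + disk_gb_hours * rates["disk_gb"])
```

A billing dashboard would apply something like this per VM, per billing period, using whatever rates the operator defines.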

Choosing a Hypervisor

Since OpenStack’s Compute service (Nova) supports so many hypervisors, choosing one can be difficult. Factors to consider include your organization’s hypervisor experience, the hypervisor’s documentation, the level of community involvement in its development, and feature parity.

Hypervisors vs. Bare Metal Provisioning and Containers

Bare metal provisioning and containers give enterprises the workload isolation and flexible configuration of cloud environments without orchestrating VMs on server clusters overlaid with hypervisors. As a result, virtual machines, and the hypervisors that support their operations, may eventually become obsolete.

The number of VMs and hypervisors on OpenStack deployments is growing due to the rise in the number of workloads in OpenStack shops and, by extension, the size of clusters needed to support them. However, containers are becoming an increasingly popular way to package and deploy software.

An OpenStack user survey shows that up to 25 percent of enterprises now use the Ironic bare metal provisioning service in production. As container and bare metal environments become more secure and sophisticated, some expect that hypervisor-based virtualization will no longer be needed.