Gaming in a Virtual Machine?

Warning: This guide is obsolete. Please use the Gaming on Linux with a Windows Virtual Machine using OVMF guide instead.

Gaming in a virtual machine?

In case you're wondering, I'm not talking about games like Angry Birds. If someone had asked me this question a few years back, I'd have chuckled. Desktop virtualization has never been a great 3D performer. Unlike mainstream processors (CPUs), which have enjoyed virtualization extensions for years, graphics cards (GPUs) offer no such functionality. At best, hypervisors leverage the host GPU drivers to emulate a graphical interface in the guest that is good enough for desktop compositing (such as Aero). Beyond this very basic desktop acceleration, we quickly hit the limits of these emulated interfaces when trying to launch modern 3D games.

While researching a few years ago, I found out that Xen was capable of passing an entire graphics card to a DomU guest [1]. Unfortunately, I never managed to get Xen working with my Nvidia GeForce on Dom0 because the proprietary driver did not play well with the Xen kernel. Fearing it would take a major investment in time, I simply abandoned the idea of doing VGA passthrough with Xen.

Last year, VFIO-VGA (or VFIO-PCI-VGA) was added to version 3.9 of the Linux kernel [2]. Since KVM is already part of the kernel on most distributions, I knew it would not cause me any problems with the proprietary Nvidia binaries. In fact, I already used KVM/QEMU as a replacement for VirtualBox when fancy graphics were not required. I simply dug out my old Radeon HD 5850 for my gaming VM experiment.

This article was originally written back in January. I have reworked the structure and made some modifications to make it a bit more accessible. I am publishing it now because I would like to do a follow-up in the coming weeks to show how quickly KVM is evolving.

A picture is worth a thousand words:

Image - VM performance on 3DMark06

Word of Caution

Since this is still under active development and patches are sent upstream frequently, it would not be wise to deploy VFIO-VGA in a production environment without a decent grasp of the subject. A good understanding of hypervisors will certainly help as well.

To ensure stability and performance, the best practice is to follow upstream for new releases. Many patches also need to be applied depending on your hardware configuration, since not everything has been accepted into the kernel yet. This means you have to know how to compile the Linux kernel, which is outside the scope of this article.

Each Linux distribution has its share of differences. However, it is possible to get a working VM with VGA passthrough on any of them. People have had success with Arch Linux, Fedora, Debian and Ubuntu, to name a few. Finally, if you are not comfortable with a command line interface, you should start elsewhere.

Preparation

Hardware

Some hardware support is required for this kind of virtualization. For VGA passthrough, your system needs to support an IOMMU [3].

Historically, before memory controllers and other parts of the Northbridge got integrated into the CPU, you needed to check for both the CPU and the chipset to support the virtualization extensions. VT-d and AMD-Vi are now mostly integrated into the CPU so only BIOS (or UEFI) support is needed.

CPU

For Intel processors, you need the VT-x and VT-d virtualization extensions [4] (verify here). If you have a K processor (like the Core i7 4770K, for example), VT-d will most likely not be supported, and you cannot do VGA passthrough without it.

For AMD processors, you need AMD-V and AMD-Vi extensions. Sadly, AMD does not have a comprehensive database of all their processors like Intel does, but most recent processors since Phenom II should have these extensions.
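
If you are unsure about your CPU, a quick check from a running Linux system is to look at the CPU flags (vmx means VT-x, svm means AMD-V). Keep in mind this only confirms the CPU side; VT-d/AMD-Vi still has to be supported and enabled in the BIOS/UEFI. On my Intel machine, this prints vmx:

$ grep -E -o -m1 'vmx|svm' /proc/cpuinfo
vmx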

Motherboard (Chipset and BIOS/UEFI)

The motherboard also has to support VT-x/AMD-V and VT-d/AMD-Vi. You will need to check your motherboard's documentation to verify this. Most manufacturers support VT-x/AMD-V just fine, but documentation about VT-d/AMD-Vi is lacking.

Most boards from Gigabyte and Asrock will support VT-d/AMD-Vi virtualization extensions. For other manufacturers like Asus and MSI, it is poorly documented.

Video Card

Ideally, you want two video cards in your system: one for the host and another for the guest. Keep in mind that a GPU passed to a VM cannot be used by the host. If you only have one video card, you need to unbind the GPU from the host before passing it to the guest, which means the host is then only accessible via SSH. The end result is very similar to a Dom0 on Xen.

For AMD/ATI video cards, a Radeon HD 3000 series or newer is recommended. For Nvidia cards, a GeForce 8000 series or newer is recommended. The FirePro and Quadro equivalents are also reported to work.

The guest's video output is sent to the passed-through card's ports when the VM boots, so you need to hook that card up to a monitor (or another display input).

VM Control (video, keyboard, mouse)

Spice, RDP, VNC or NX can be used to control the VM. However, these solutions do not provide a decent experience when gaming at high resolution and high framerates.

To ease the installation of a guest operating system, it is possible to use -vga qxl, which redirects the VGA output of the VM into a QEMU window, much like VMware Player.
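
As a rough sketch, an installation run could look like this (the image path, size and ISO name are only examples, adjust them to your setup):

$ qemu-img create -f raw /var/tmp/kvm/windows/windows.img 60G
$ qemu-system-x86_64 --enable-kvm -m 4096 -cpu host \
  -vga qxl \
  -drive file=/var/tmp/kvm/windows/windows.img \
  -cdrom /path/to/windows.iso -boot d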

However, when playing or whenever a high framerate is needed, -vga qxl should not be used. In that case, an alternative method needs to be set up to control the guest. Synergy can be used to share the keyboard and mouse between the host and the guest. Another option is to pass dedicated USB peripherals to the guest, or to use a KVM switchbox (like this one) to change focus between the guest and the host.

Software

To allow VGA passthrough, we will be using KVM and QEMU. Optionally, you can use libvirt to easily manage multiple guests on one host. It is also possible to use Virtio to improve the performance of the guest's network interface and hard drive controller.

Linux Kernel

You also need version 3.9 or newer of the Linux kernel; VFIO-VGA was introduced in 3.9, and I personally have had success since 3.11. You can use the kernel provided by your Linux distribution, but chances are you will need to recompile it with the following options:

CONFIG_VFIO_IOMMU_TYPE1
CONFIG_VFIO
CONFIG_VFIO_PCI
CONFIG_VFIO_PCI_VGA

CONFIG_VIRTIO_PCI
CONFIG_VIRTIO_BALLOON
CONFIG_VIRTIO_MMIO
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES
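
Before recompiling anything, it is worth checking whether your distribution kernel already ships with these options (the config file location varies; some distributions expose it at /proc/config.gz instead):

$ grep -E 'CONFIG_VFIO|CONFIG_VIRTIO' /boot/config-$(uname -r)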

Also, for the host to keep working correctly when a VM is started, a VGA arbiter patch is required. In my case, I needed this patch to make my GeForce work correctly.

QEMU (and SeaBIOS)

To avoid restarting the host, or suspending and waking it up, every time a VM is closed, you should use at least QEMU 1.7 with the VGA RESET patch. Without it, you might experience host crashes when restarting a guest because the GPU was not reset correctly. VGA RESET has been integrated into QEMU since version 2.0. I compile QEMU from this git: https://github.com/awilliam/qemu-vfio.
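
For reference, building QEMU from that tree goes roughly like this (the configure flags are simply the ones I would start with; trim or extend them to your needs):

$ git clone https://github.com/awilliam/qemu-vfio.git
$ cd qemu-vfio
$ ./configure --target-list=x86_64-softmmu --enable-kvm
$ make -j$(nproc)
$ sudo make install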

SeaBIOS has been bundled with QEMU since version 1.7, so there is no longer any need to compile it separately.

Configuration

Modules

Once the software above is compiled and installed, you need to configure the host to allow VFIO-PCI and VFIO-PCI-VGA to work with your hardware. There is a very comprehensive topic on the Arch Linux forums; I suggest you take a good look at it.

On my system, I blacklisted the radeon module to prevent my second video card from being used by the host. You just need to add to or create a file in the /etc/modprobe.d/ directory.

# /etc/modprobe.d/blacklist.conf
blacklist radeon

I then created a file to ensure that KVM and VFIO loaded with some parameters.

# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options kvm_intel emulate_invalid_guest_state=0

# If your motherboard does not support interrupt remapping
# Watch your logs closely with this option
options vfio_iommu_type1 allow_unsafe_interrupts=1
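
Once the IOMMU is enabled (see the GRUB step below), you can check the kernel log to find out whether interrupt remapping is actually supported, and therefore whether you really need allow_unsafe_interrupts at all:

$ dmesg | grep -i remapping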

Finally, I start IOMMU by appending to the Linux command line in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=on intel_iommu=on"

After modifying GRUB configuration files, you need to update the bootloader. On Debian and Ubuntu, the update-initramfs and update-grub scripts do it automatically. After a reboot, you can verify that IOMMU is working correctly by checking the Kernel logs.

$ dmesg | grep -i pci-dma
[    0.923886] PCI-DMA: Intel(R) Virtualization Technology for Directed I/O
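
You can also list the IOMMU groups to see which devices share a group with your GPU; ideally the card and its HDMI audio function are isolated from everything else:

$ find /sys/kernel/iommu_groups/ -type l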

VFIO

I use this script placed in /usr/bin/ to attach my peripherals to the vfio-pci module:

#!/bin/bash
#
# /usr/bin/vfio-bind
# Attach devices to vfio-pci

modprobe vfio-pci

for dev in "$@"; do
        vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
        device=$(cat /sys/bus/pci/devices/$dev/device)
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
                echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done

To ease the experience, testing can be done as superuser (root). If you don't use root, then sudo will be necessary for most commands.

You can find the right device IDs to pass to the vfio-bind script by checking with: lspci | grep -i radeon.

So to bind my video card (IDs are 02:00.0 and 02:00.1) I just pass the parameters to the script.

$ vfio-bind 0000:02:00.0 0000:02:00.1

If you are not having segfaults at this point, then everything is going well!
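
To double-check that the binding took, you can ask lspci which driver is now in use for both functions (using my example bus address); each entry should report vfio-pci as the kernel driver in use:

$ lspci -nnk -s 02:00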

Virtual Machine

To control my VM, I use a second set of keyboard and mouse that are USB connected to the host. I note the IDs of these peripherals:

$ lsusb | grep -i microsoft
Bus 002 Device 005: ID 093a:2510 Microsoft Corp. Wired Mouse 600
Bus 002 Device 003: ID 045e:0750 Microsoft Corp. Wired Keyboard 600

Then, when calling qemu-system-x86_64 --enable-kvm, you just add the desired parameters. To pass a dedicated GPU to the VM, you need at least -vga none and this -device:

-vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \

Then, you can simply attach the actual GPU with the right ID:

-device vfio-pci,host=$YOUR_GPU,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=$GPU_AUDIO,bus=root.1,addr=00.1

To share the audio from your guest to the host, you can also emulate an audio adapter:

-device ich9-intel-hda,bus=pcie.0,addr=1b.0,id=sound0 \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0

I made a script to simplify the command, you can modify it and use it as you wish:

#!/bin/bash
###############################################################################
# Settings
###############################################################################
# VM Name
NAME="Windows"

# BIOS
SYSTEM_BIOS="/usr/share/qemu/bios.bin"

# CPU
CPU="host"
CORES="2"

# RAM
RAM="4096"

# Passthrough devices
GPU_RADEON="02:00.0"
GPU_AUDIO="02:00.1"
GPU_BIOS="/var/tmp/kvm/vgabios-gigabyte-hd5850-1024m.rom"

# USB Keyboard
USB_KBD="045e:0750"

# USB Mouse
USB_MOU="093a:2510"

# Networking
MAC_ADDR="ENTER-A-MAC-ADDR"

# Hard Drive
HD_PATH="/var/tmp/kvm/windows/windows.img"  

# CD ROMS
CD_PATH_WIN="PATH-TO-YOUR-OS-ISO"
CD_PATH_VIRTIO="/var/tmp/kvm/Downloads/virtio-win-0.1-65.iso"

# Execute
###############################################################################
qemu-system-x86_64 --enable-kvm \
-M q35 -m $RAM -cpu $CPU,hv-time -name $NAME \
-smp $(($CORES*2)),sockets=1,cores=$CORES,threads=2 \
-bios $SYSTEM_BIOS -vga none -rtc base=localtime,clock=host \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device piix4-ide,bus=pcie.0,id=ide \
-device vfio-pci,host=$GPU_RADEON,bus=root.1,addr=00.0,multifunction=on,x-vga=on,rombar=0,romfile=$GPU_BIOS \
-device vfio-pci,host=$GPU_AUDIO,bus=root.1,addr=00.1 \
-device ich9-intel-hda,bus=pcie.0,addr=1b.0,id=sound0 \
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
-usb -usbdevice host:$USB_KBD -usbdevice host:$USB_MOU \
-net nic,macaddr=$MAC_ADDR,model=virtio -net tap,ifname=tap0,script=no,downscript=no \
-drive file=$HD_PATH,id=disk,if=virtio \
-drive file=$CD_PATH_WIN,id=wincd -device ide-cd,bus=ide.0,drive=wincd \
-drive file=$CD_PATH_VIRTIO,id=isocd -device ide-cd,bus=ide.1,drive=isocd \
-boot order=dc,menu=on
###############################################################################
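
Note that -net tap,ifname=tap0,script=no expects the tap0 interface to already exist on the host. A minimal sketch of how to pre-create it and attach it to an existing bridge (I am assuming a bridge named br0 here, which you may need to create first) would be:

# Create a persistent tap device usable by the user running QEMU
$ ip tuntap add dev tap0 mode tap user $USER
$ ip link set tap0 up
# Attach it to the bridge (assumed to be br0) so the guest can reach the network
$ ip link set tap0 master br0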

If everything works correctly, the VM will start and you will see video output from your second video card. You can replace -vga none with -vga qxl when you need an easy way to share your keyboard and mouse with the host.

Observations

I was very impressed by how stable both the host and the guest were. This setup allows me to receive my emails and keep all my work open and accessible when the VM is running. I have not experienced crashes and I allowed the guest to run for a week straight without any reboots.

You can see it in action:

Video - Metro 2033 with KVM/QEMU

After installing the Catalyst drivers and DirectX 9.0c, it is possible to play almost any game. The most recent titles I tried were mainly RTS games such as Planetary Annihilation and Wargame: Red Dragon. At the time, both games were in beta but worked flawlessly.

However, I observed some performance problems with games using the Unreal Engine, XCOM: Enemy Unknown and Borderlands for example. These games use the CPU's debug registers, and KVM wastes real CPU cycles handling them, which has a major impact on performance when it happens. I have read that a patch is going upstream to address the issue.

Conclusion

To conclude, I think that within a few months I will no longer need a dedicated NTFS partition for Windows on my hard drive. Performance was extremely satisfactory and, compared to a dual boot, the VM allows me to keep working in parallel. There is also no risk of Windows writing over the boot loader or any other partition. Virtio gives adequate I/O throughput for the network interface and the hard drive controller.

I see this technology as very interesting for the following reasons:

  1. Maximized performance – It is possible to maximize performance for 3D games and demanding software like CAD, Adobe Premiere, etc.
  2. Libvirt and Virsh integration – This allows easy management of multiple KVM virtual machines within the same host. It is ideal for quick test environments and to aid in development.
  3. FOSS – We can execute, study, modify and distribute the code freely.

There are some caveats at the moment:

  1. Unstable / still under active development – There are frequent updates to QEMU and KVM for this functionality, so it is not yet ready for deployment in a production environment.
  2. Relatively complex – Compared to VirtualBox, VGA passthrough with KVM is much more complex. You need a good understanding of the Linux environment; nobody can just pop up a working virtual machine with a simple point-and-click wizard.

Additional references: