Qemu
is a machine emulator that can run operating systems and programs for one machine on a different machine. Mostly it is not used as an emulator but as a virtualizer, in collaboration with the KVM kernel components. In that case it utilizes the virtualization technology of the hardware to virtualize guests.
While qemu has a command line interface and a monitor to interact with running guests, those are rarely used for means other than development purposes. Libvirt provides an abstraction from specific versions and hypervisors and encapsulates some workarounds and best practices.
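To illustrate the difference between the two modes, the same binary can run a guest in plain emulation mode or, with -enable-kvm, using hardware-assisted virtualization. A sketch, assuming an x86 host with KVM available and an existing disk image disk.qcow (creating one is shown further below):
# plain emulation (TCG) - works without hardware support, but slowly
qemu-system-x86_64 -m 512 -drive file=disk.qcow,format=qcow2
# hardware-assisted virtualization via KVM
qemu-system-x86_64 -enable-kvm -m 512 -drive file=disk.qcow,format=qcow2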
Running Qemu/KVM
While there are much more user-friendly and comfortable ways, the command below is probably the quickest way to see something called Ubuntu moving on screen: directly running it from the netboot ISO.
Warning: this is just for illustration - not generally recommended without verifying the checksums; Multipass and UVTool are much better ways to get actual guests easily.
Run:
sudo qemu-system-x86_64 -enable-kvm -cdrom http://archive.ubuntu.com/ubuntu/dists/bionic-updates/main/installer-amd64/current/images/netboot/mini.iso
You could download the ISO for faster access at runtime and, for example, add a disk to the same guest by:
- creating the disk
qemu-img create -f qcow2 disk.qcow 5G
- using the disk by adding
-drive file=disk.qcow,format=qcow2
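Putting both together, a complete invocation could look like the following sketch; it assumes the ISO was downloaded into the current directory as mini.iso:
sudo qemu-system-x86_64 -enable-kvm -m 1024 -cdrom mini.iso -drive file=disk.qcow,format=qcow2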
Those tools can do much more, as you’ll find in their respective (long) man pages. There is also a vast assortment of auxiliary tools to make them more consumable for specific use cases and needs - for example UI-driven use through libvirt. But in general - even those tools use it under the hood - it comes down to:
qemu-system-x86_64 options image[s]
So take a look at the man pages of qemu and qemu-img, as well as the qemu documentation, to see which options are the right ones for your needs.
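For instance, a fairly typical invocation combining some of the most common options could look like this sketch; the memory size, CPU count and disk path are assumptions to adapt to your setup:
sudo qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -drive file=disk.qcow,format=qcow2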
Graphics
Graphics for qemu/kvm always comes in two pieces.
- A front end - controlled via the `-vga` argument - which is provided to the guest. Usually one of `cirrus`, `std`, `qxl` or `virtio`. The default these days is `qxl`, which strikes a good balance between guest compatibility and performance. The guest needs a driver for whatever is selected, which is the most common reason to switch from the default, e.g. to `cirrus` for very old Windows versions.
- A back end - controlled via the `-display` argument - which is what the host uses to actually display the graphical content. That can be an application window via `gtk` or a `vnc` server.
- In addition one can enable the `-spice` back end (can be done in addition to `vnc`), which can be faster and provides more authentication methods than vnc.
- If you want no graphical output at all, you can save some memory and CPU cycles by setting `-nographic`.

If you run with `spice` or `vnc` you can use native vnc tools or virtualization-focused tools like `virt-viewer`. More about these in the libvirt section.
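As an example of such basic usage, the following would run a guest with the default `qxl` front end and export the display via VNC on display :1 (i.e. TCP port 5901) - a sketch reusing the disk image created earlier:
sudo qemu-system-x86_64 -enable-kvm -m 1024 -vga qxl -vnc :1 -drive file=disk.qcow,format=qcow2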
All those options above are considered basic usage of graphics, but there are advanced options for further needs. The most common of those cases are:
- Need some 3D acceleration: use `-vga virtio` with a local display having a GL context: `-display gtk,gl=on`. That will use virgl on the host and needs guest drivers for virt3d, which are common on Linux but hard to come by for other guest operating systems. While not as fast as the next two options, the big benefit is that it can be used without additional hardware and without a proper IOMMU setup. (A command-line sketch follows after this list.)
- Need native performance: use PCI passthrough of additional GPUs in the system. You’ll need an IOMMU setup, and to unbind the cards from the host, before you can pass them through like
-device vfio-pci,host=05:00.0,bus=1,addr=00.0,multifunction=on,x-vga=on -device vfio-pci,host=05:00.1,bus=1,addr=00.1
- Need native performance, but multiple guests per card: like PCI passthrough, but using mediated devices to shard a card on the host into multiple devices and pass those through like
-display gtk,gl=on -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/4dd511f6-ec08-11e8-b839-2f163ddee3b3,display=on,rombar=0
The sharding of the cards is driver-specific and therefore will differ per manufacturer; refer to your vendor's documentation for the details.
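For the first advanced case, a minimal virgl-accelerated invocation could look like the following sketch; it assumes a host with working OpenGL and reuses the disk image from earlier:
sudo qemu-system-x86_64 -enable-kvm -m 2048 -vga virtio -display gtk,gl=on -drive file=disk.qcow,format=qcow2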
Especially the advanced cases can get pretty complex, therefore it is recommended to use qemu through libvirt for those cases. Libvirt will take care of all but the host kernel/bios tasks of such configurations.
Upgrading the machine type
If you are unsure what this is, you might consider it as buying (virtual) hardware of the same spec, but with a newer release date. You are encouraged to update machine types in general, and might want to update the machine type of an existing defined guest in particular, to:
- pick up the latest security fixes and features
- continue using a guest created on a now unsupported release
In general it is recommended to update machine types when upgrading qemu/kvm to a new major version. But this can likely never be an automated task, as the change is guest-visible. The guest devices might change in appearance, new features will be announced to the guest, and so on. Linux is usually very good at tolerating such changes, but it depends so much on the setup and workload of the guest that this has to be evaluated by the owner/admin of the system. Other operating systems were known to often be severely impacted by changing the hardware. Consider a machine type change similar to replacing all devices and firmware of a physical machine with the latest revision - all considerations that apply there apply to evaluating a machine type upgrade as well.
As usual with major configuration changes, it is wise to back up your guest definition and disk state to be able to do a rollback - just in case. There is no integrated single command to update the machine type via virsh or similar tools. It is a normal part of your machine definition, and therefore updated the same way as most other parts of it.
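Taking that backup could be as simple as the following sketch (the disk image path is an assumption - adjust it to your storage layout):
virsh dumpxml <yourmachine> > <yourmachine>-backup.xml
sudo cp /var/lib/libvirt/images/<yourmachine>.qcow2 <yourmachine>-disk-backup.qcow2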
First shut down your machine and wait until it has reached that state.
virsh shutdown <yourmachine>
# wait
virsh list --inactive
# should now list your machine as "shut off"
Then edit the machine definition and find the machine type in the `machine` attribute of the `type` tag.
virsh edit <yourmachine>
<type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
Change this to the value you want. If needed, you can check which types are available via `-M ?`. Note that while upstream types are provided as a convenience, only Ubuntu types are supported. There you can also see what the current default would be. In general it is strongly recommended to change to newer types if possible, to exploit newer features, but also to benefit from bugfixes that only apply to the newer device virtualization.
kvm -M ?
# lists machine types, e.g.
pc-i440fx-xenial Ubuntu 16.04 PC (i440FX + PIIX, 1996)
...
pc-i440fx-bionic Ubuntu 18.04 PC (i440FX + PIIX, 1996) (default)
...
After this you can start your guest again. You can check the current machine type from guest and host depending on your needs.
virsh start <yourmachine>
# check from host, via dumping the active xml definition
virsh dumpxml <yourmachine> | xmllint --xpath "string(//domain/os/type/@machine)" -
# or from the guest via dmidecode (if supported)
sudo dmidecode | grep Product -A 1
Product Name: Standard PC (i440FX + PIIX, 1996)
Version: pc-i440fx-bionic
If you keep non-live definitions around - like xml files - remember to update those as well.
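For such an XML file the update could be a simple substitution, sketched here with a hypothetical file name and the machine types from the listing above:
sed -i 's/pc-i440fx-xenial/pc-i440fx-bionic/' yourmachine.xml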
Note
This is also documented, along with some more constraints and considerations, at the Ubuntu Wiki.
QEMU usage for microvms
Lately QEMU gained another use case: being used in a container-like style, providing enhanced isolation compared to containers, but focused on initialization speed.
To achieve that, several components have been added:
- the microvm machine type
- an alternative simple firmware that can boot Linux, called qboot
- a qemu build with reduced features matching these use cases, called qemu-system-x86-microvm
For example, if you happen to already have a stripped-down workload that has everything it needs to execute in an initrd, you might run it like the following:
$ sudo qemu-system-x86_64 -M ubuntu-q35 -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload
To run the same with microvm, qboot and the minimized qemu, you would do the following:
- run it with the microvm machine type, i.e. change -M to -M microvm
- use the qboot BIOS by adding -bios /usr/share/qemu/bios-microvm.bin
- install the feature-minimized qemu-system package: $ sudo apt install qemu-system-x86-microvm
An invocation will now look like:
$ sudo qemu-system-x86_64 -M microvm -bios /usr/share/qemu/bios-microvm.bin -cpu host -m 1024 -enable-kvm -serial mon:stdio -nographic -display curses -append 'console=ttyS0,115200,8n1' -kernel vmlinuz-5.4.0-21 -initrd /boot/initrd.img-5.4.0-21-workload
That will cut down the qemu, BIOS and virtual-hardware initialization time a lot.
You will now - even more than before - spend the majority of the startup time inside the guest, which implies that further tuning probably has to go into the guest's kernel and userspace initialization time.
Note
For now, microvm, the qboot BIOS and the other components of this are rather new upstream and not as verified as many other parts of the virtualization stack. Therefore none of the above is the default. Furthermore, being the default would mean many upgraders would regress, finding a qemu that doesn't have most of the features they are used to. Due to that, the qemu-system-x86-microvm package is intentionally a strong opt-in, conflicting with the normal qemu-system-x86 package.