Support statement for this release

This document describes the support status, and in particular the security support status, of the Xen branch within which you find it.

See the bottom of the file for the definitions of the support status levels etc.

1 Release Support

Xen-Version: 4.10
Initial-Release: 2017-12-13
Supported-Until: 2019-06-13
Security-Support-Until: 2020-12-13
Release Notes: RN

2 Feature Support

2.1 Host Architecture

2.1.1 x86-64

Status: Supported

2.1.2 ARM v7 + Virtualization Extensions

Status: Supported

2.1.3 ARM v8

Status: Supported

2.2 Host hardware support

2.2.1 Physical CPU Hotplug

Status, x86: Supported

2.2.2 Physical Memory Hotplug

Status, x86: Supported

2.2.3 Host ACPI (via Domain 0)

Status, x86 PV: Supported
Status, ARM: Experimental

2.2.4 x86/Intel Platform QoS Technologies

Status: Tech Preview

2.2.5 IOMMU

Status, AMD IOMMU: Supported
Status, Intel VT-d: Supported
Status, ARM SMMUv1: Supported
Status, ARM SMMUv2: Supported

2.2.6 ARM/GICv3 ITS

Extension to the GICv3 interrupt controller to support MSI.

Status: Experimental

2.3 Guest Type

2.3.1 x86/PV

Traditional Xen PV guest

No hardware requirements

Status: Supported

2.3.2 x86/HVM

Fully virtualised guest using hardware virtualisation extensions

Requires hardware virtualisation support (Intel VMX / AMD SVM)

Status, domU: Supported

2.3.3 x86/PVH

PVH is a next-generation paravirtualized mode designed to take advantage of hardware virtualization support when possible. During development this was sometimes called HVMLite or PVHv2.

Requires hardware virtualisation support (Intel VMX / AMD SVM)

Status, domU: Supported

2.3.4 ARM

ARM only has one guest type at the moment

Status: Supported

2.4 Toolstack

2.4.1 xl

Status: Supported

2.4.2 Direct-boot kernel image format

Format which the toolstack accepts for direct-boot kernels

Supported, x86: bzImage, ELF
Supported, ARM32: zImage
Supported, ARM64: Image
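
For illustration, a minimal xl domain configuration for direct-booting a PV guest might look like the following (all names and paths here are hypothetical):

    # Hypothetical xl.cfg fragment: direct-boot a PV guest from a kernel image.
    name    = "example-guest"
    kernel  = "/boot/vmlinuz-example"         # bzImage or ELF on x86
    ramdisk = "/boot/initrd-example.img"
    extra   = "root=/dev/xvda1 console=hvc0"  # guest kernel command line
    memory  = 1024
    vcpus   = 2
    disk    = [ "/dev/vg0/example,raw,xvda,rw" ]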

2.4.3 Dom0 init support for xl

Status, SysV: Supported
Status, systemd: Supported
Status, BSD-style: Supported

2.4.4 JSON output support for xl

Output of information in machine-parseable JSON format

Status: Experimental
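
For example, xl list can emit its long-form output as JSON (the exact fields may vary between versions):

    # Dump full information about all domains in JSON format:
    xl list --long > domains.json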

2.4.5 Open vSwitch integration for xl

Status, Linux: Supported
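
For illustration, a guest vif can be attached to an Open vSwitch bridge via the vif-openvswitch hotplug script (the bridge name here is hypothetical):

    # Hypothetical vif specification using the Open vSwitch hotplug script:
    vif = [ 'bridge=ovsbr0,script=vif-openvswitch' ]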

2.4.6 Virtual cpu hotplug

Status: Supported
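
For example, assuming a running guest named example-guest configured with maxvcpus of at least 4:

    # Bring the number of online vCPUs of the guest to 4:
    xl vcpu-set example-guest 4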

2.4.7 QEMU backend hotplugging for xl

Status: Supported

2.5 Toolstack/3rd party

2.5.1 libvirt driver for xl

Status: Supported, Security support external

2.6 Debugging, analysis, and crash post-mortem

2.6.1 Host serial console

Status, NS16550: Supported
Status, EHCI: Supported
Status, Cadence UART (ARM): Supported
Status, PL011 UART (ARM): Supported
Status, Exynos 4210 UART (ARM): Supported
Status, OMAP UART (ARM): Supported
Status, SCI(F) UART: Supported

2.6.2 Hypervisor ‘debug keys’

These are functions triggered either from the host serial console, or via the xl ‘debug-keys’ command, which cause Xen to dump various hypervisor state to the console.

Status: Supported, not security supported
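
For example (the key ‘h’ prints a summary of the available keys):

    # Trigger a debug key from dom0, then read the output from the console ring:
    xl debug-keys h
    xl dmesg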

2.6.3 Hypervisor synchronous console output (sync_console)

Xen command-line flag to force synchronous console output.

Status: Supported, not security supported

Useful for debugging, but not suitable for production environments due to incurred overhead.
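
For illustration, a hypothetical GRUB2 entry enabling it might pass:

    # Illustrative Xen boot line with synchronous console output enabled:
    multiboot2 /boot/xen.gz console=com1 com1=115200,8n1 sync_console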

2.6.4 gdbsx

Status, x86: Supported, not security supported

Debugger to debug ELF guests

2.6.5 Soft-reset for PV guests

Soft-reset allows a new kernel to start ‘from scratch’ with a fresh VM state, but with all the memory from the previous state of the VM intact. This is primarily designed to allow “crash kernels”, which can do core dumps of memory to help with debugging in the event of a crash.

Status: Supported

2.6.6 xentrace

Tool to capture Xen trace buffer data

Status, x86: Supported
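
A typical (illustrative) session captures binary trace data and then pretty-prints it; the formats file ships in the Xen source tree, and its installed path varies:

    # Capture trace records until interrupted, discarding stale buffer contents:
    xentrace -D out.bin
    # Pretty-print the captured records:
    xentrace_format formats < out.bin > out.txt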

2.6.7 gcov

Export hypervisor coverage data suitable for analysis by gcov or lcov.

Status: Supported, not security supported

2.7 Memory Management

2.7.1 Dynamic memory control

Allows a guest to add or remove memory after boot-time. This is typically done by a guest kernel agent known as a “balloon driver”.

Status: Supported
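
For example, assuming a guest named example-guest whose kernel includes a balloon driver:

    # Ask the balloon driver to bring the guest to a 2048 MiB target:
    xl mem-set example-guest 2048m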

2.7.2 Populate-on-demand memory

This is a mechanism that allows normal operating systems with only a balloon driver to boot with memory < maxmem.

Status, x86 HVM: Supported
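
xl uses populate-on-demand automatically when an HVM guest is configured with memory lower than maxmem, e.g. (a hypothetical fragment):

    # Boot with 1 GiB populated out of a 4 GiB guest-visible maximum:
    type   = "hvm"
    memory = 1024   # initial target (MiB)
    maxmem = 4096   # maximum (MiB); the balloon driver can grow towards this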

2.7.3 Memory Sharing

Allow sharing of identical pages between guests

Status, x86 HVM: Experimental

2.7.4 Memory Paging

Allow pages belonging to guests to be paged to disk

Status, x86 HVM: Experimental

2.7.5 Transcendent Memory

Transcendent Memory (tmem) allows the creation of hypervisor memory pools which guests can use to store memory, rather than caching it in their own memory or swapping it to disk. Having these pools in the hypervisor can allow more efficient aggregate use of memory across VMs.

Status: Experimental

2.7.6 Alternative p2m

Allows external monitoring of hypervisor memory by maintaining multiple physical to machine (p2m) memory mappings.

Status, x86 HVM: Tech Preview
Status, ARM: Tech Preview

2.8 Resource Management

2.8.1 CPU Pools

Groups physical CPUs into distinct pools called “cpupools”; each pool can use a different scheduler and scheduling properties.

Status: Supported
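
For illustration, assuming CPUs 4-7 have been freed from their current pool and a config file pool1.cfg exists containing e.g. name = "pool1", sched = "credit2", cpus = "4-7":

    # Create the pool and migrate a (hypothetical) guest into it:
    xl cpupool-create pool1.cfg
    xl cpupool-migrate example-guest pool1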

2.8.2 Credit Scheduler

A weighted proportional fair share virtual CPU scheduler. This is the default scheduler.

Status: Supported
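
Credit scheduler parameters can be adjusted per domain at run time; for example, assuming a guest named example-guest:

    # Double the guest's weight (default 256) and cap it at 50% of one CPU:
    xl sched-credit -d example-guest -w 512 -c 50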

2.8.3 Credit2 Scheduler

A general purpose scheduler for Xen, designed with particular focus on fairness, responsiveness, and scalability

Status: Supported

2.8.4 RTDS based Scheduler

A soft real-time CPU scheduler built to provide guaranteed CPU capacity to guest VMs on SMP hosts

Status: Experimental

2.8.5 ARINC653 Scheduler

A periodically repeating fixed timeslice scheduler.

Status: Supported

Currently only single-vcpu domains are supported.

2.8.6 Null Scheduler

A very simple, very static scheduling policy that always schedules the same vCPU(s) on the same pCPU(s). It is designed for maximum determinism and minimum overhead on embedded platforms.

Status: Experimental

2.8.7 NUMA scheduler affinity

Enables NUMA aware scheduling in Xen

Status, x86: Supported

2.9 Scalability

2.9.1 Super page support

NB that this refers to the ability of guests to have higher-level page table entries point directly to memory, improving TLB performance. On ARM, and on x86 in HAP mode, the guest has whatever support is enabled by the hardware.

This feature is independent of the ARM “page granularity” feature (see below).

Status, x86 HVM/PVH, HAP: Supported
Status, x86 HVM/PVH, Shadow, 2MiB: Supported
Status, ARM: Supported

On x86 in shadow mode, only 2MiB (L2) superpages are available; furthermore, they do not have the performance characteristics of hardware superpages.

2.9.2 x86/PVHVM

This is a useful label for a set of hypervisor features which add paravirtualized functionality to HVM guests for improved performance and scalability. This includes exposing event channels to HVM guests.

Status: Supported

2.10 High Availability and Fault Tolerance

2.10.1 Remus Fault Tolerance

Status: Experimental

2.10.2 COLO Manager

Status: Experimental

2.10.3 x86/vMCE

Forward Machine Check Exceptions to appropriate guests

Status: Supported

2.11 Virtual driver support, guest side

2.11.1 Blkfront

Guest-side driver capable of speaking the Xen PV block protocol

Status, Linux: Supported
Status, FreeBSD: Supported, Security support external
Status, NetBSD: Supported, Security support external
Status, OpenBSD: Supported, Security support external
Status, Windows: Supported

2.11.2 Netfront

Guest-side driver capable of speaking the Xen PV networking protocol

Status, Linux: Supported
Status, FreeBSD: Supported, Security support external
Status, NetBSD: Supported, Security support external
Status, OpenBSD: Supported, Security support external
Status, Windows: Supported

2.11.3 PV Framebuffer (frontend)

Guest-side driver capable of speaking the Xen PV Framebuffer protocol

Status, Linux (xen-fbfront): Supported

2.11.4 PV Console (frontend)

Guest-side driver capable of speaking the Xen PV console protocol

Status, Linux (hvc_xen): Supported
Status, FreeBSD: Supported, Security support external
Status, NetBSD: Supported, Security support external
Status, Windows: Supported

2.11.5 PV keyboard (frontend)

Guest-side driver capable of speaking the Xen PV keyboard protocol. Note that the “keyboard protocol” includes mouse / pointer support as well.

Status, Linux (xen-kbdfront): Supported

2.11.6 PV USB (frontend)

Status, Linux: Supported

2.11.7 PV SCSI protocol (frontend)

Status, Linux: Supported, with caveats

NB that while the PV SCSI frontend is in Linux and tested regularly, there is currently no xl support.

2.11.8 PV TPM (frontend)

Guest-side driver capable of speaking the Xen PV TPM protocol

Status, Linux (xen-tpmfront): Tech Preview

2.11.9 PV 9pfs frontend

Guest-side driver capable of speaking the Xen 9pfs protocol

Status, Linux: Tech Preview

2.11.10 PVCalls (frontend)

Guest-side driver capable of making pv system calls

Status, Linux: Tech Preview

2.12 Virtual device support, host side

For host-side virtual device support, “Supported” and “Tech preview” include xl/libxl support unless otherwise noted.

2.12.1 Blkback

Host-side implementations of the Xen PV block protocol.

Status, Linux (xen-blkback): Supported
Status, QEMU (xen_disk), raw format: Supported
Status, QEMU (xen_disk), qcow format: Supported
Status, QEMU (xen_disk), qcow2 format: Supported
Status, QEMU (xen_disk), vhd format: Supported
Status, FreeBSD (blkback): Supported, Security support external
Status, NetBSD (xbdback): Supported, Security support external
Status, Blktap2, raw format: Deprecated
Status, Blktap2, vhd format: Deprecated

Backends only support raw format unless otherwise specified.
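
For example, a non-raw image is attached by naming its format explicitly in the disk specification (the path here is hypothetical):

    # Hypothetical disk specification selecting the qcow2 format (QEMU backend):
    disk = [ 'format=qcow2, vdev=xvda, access=rw, target=/var/lib/xen/images/example.qcow2' ]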

2.12.2 Netback

Host-side implementations of the Xen PV network protocol

Status, Linux (xen-netback): Supported
Status, FreeBSD (netback): Supported, Security support external
Status, NetBSD (xennetback): Supported, Security support external

2.12.3 PV Framebuffer (backend)

Host-side implementation of the Xen PV framebuffer protocol

Status, QEMU: Supported

2.12.4 PV Console (xenconsoled)

Host-side implementation of the Xen PV console protocol

Status: Supported

2.12.5 PV keyboard (backend)

Host-side implementation of the Xen PV keyboard protocol. Note that the “keyboard protocol” includes mouse / pointer support as well.

Status, QEMU: Supported

2.12.6 PV USB (backend)

Host-side implementation of the Xen PV USB protocol

Status, QEMU: Supported

2.12.7 PV SCSI protocol (backend)

Status, Linux: Experimental

NB that while the PV SCSI backend is in Linux and tested regularly, there is currently no xl support.

2.12.8 PV TPM (backend)

Status: Tech Preview

2.12.9 PV 9pfs (backend)

Status, QEMU: Tech Preview

2.12.10 PVCalls (backend)

Status, Linux: Experimental

PVCalls backend has been checked into Linux, but has no xl support.

2.12.11 Online resize of virtual disks

Status: Supported

2.13 Security

2.13.1 Driver Domains

“Driver domains” are non-Domain 0 domains which have access to physical devices and act as back-ends for other guests.

Status: Supported, with caveats

See the appropriate “Device Passthrough” section for more information about security support.

2.13.2 Device Model Stub Domains

Status: Supported, with caveats

Vulnerabilities of a device model stub domain to a hostile driver domain (either compromised or untrusted) are excluded from security support.
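
For illustration, a stub domain device model is requested in the guest configuration; note that in this release the stub domain device model is based on qemu-xen-traditional:

    # Run the device model in a stub domain rather than as a dom0 process:
    device_model_stubdomain_override = 1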

2.13.3 KCONFIG Expert

Status: Experimental

2.13.4 Live Patching

Status, x86: Supported
Status, ARM: Experimental

Compile time disabled for ARM by default.

2.13.5 Virtual Machine Introspection

Status, x86: Supported, not security supported

2.13.6 XSM & FLASK

Status: Experimental

Compile time disabled by default.

Also note that using XSM to delegate various domain control hypercalls to particular other domains, rather than only permitting use by dom0, is also specifically excluded from security support for many hypercalls. Please see XSA-77 for more details.
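
For illustration, a hypervisor built with XSM/FLASK enabled might be booted in enforcing mode as follows (hypothetical GRUB2 fragment; the policy file name varies, and the policy is loaded as a boot module):

    multiboot2 /boot/xen.gz flask=enforcing
    module2 /boot/xenpolicy-4.10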

2.13.7 FLASK default policy

Status: Experimental

The default policy includes FLASK labels and roles for a “typical” Xen-based system with dom0, driver domains, stub domains, domUs, and so on.

2.14 Virtual Hardware, Hypervisor

2.14.1 x86/Nested PV

This means running a Xen hypervisor inside an HVM domain on a Xen system, with support for PV L2 guests only (i.e., hardware virtualization extensions not provided to the guest).

Status, x86 Xen HVM: Tech Preview

This works, but has performance limitations because the L1 dom0 can only access emulated L1 devices.

Xen may also run inside other hypervisors (KVM, Hyper-V, VMware), but nobody has reported on performance.

2.14.2 x86/Nested HVM

This means providing hardware virtualization support to guest VMs, allowing, for instance, a nested Xen to support both PV and HVM guests. It also implies support for other hypervisors, such as KVM, Hyper-V, Bromium, and so on, as guests.

Status, x86 HVM: Experimental

2.14.3 vPMU

Virtual Performance Monitoring Unit for HVM guests

Status, x86: Supported, not security supported

Disabled by default (enable with hypervisor command line option). This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
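
For illustration, vPMU is enabled via the vpmu boot option:

    # Illustrative Xen boot line enabling vPMU (not security supported):
    multiboot2 /boot/xen.gz vpmu=on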

2.14.4 x86/PCI Device Passthrough

Status, x86 PV: Supported, with caveats
Status, x86 HVM: Supported, with caveats

Only systems using IOMMUs are supported.

Not compatible with migration, populate-on-demand, altp2m, introspection, memory sharing, or memory paging.

Because of hardware limitations (affecting any operating system or hypervisor), it is generally not safe to use this feature to expose a physical device to completely untrusted guests. However, this feature can still confer significant security benefit when used to remove drivers and backends from domain 0 (i.e., Driver Domains).
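
For illustration, a device is made assignable and then attached, either at run time or in the guest configuration (the BDF address and guest name here are hypothetical):

    # Hand the device over to pciback, then hot-plug it into a guest:
    xl pci-assignable-add 03:10.1
    xl pci-attach example-guest 03:10.1
    # Or statically in the guest config:
    #   pci = [ '03:10.1' ]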

2.14.5 ARM/Non-PCI device passthrough

Status: Supported, not security supported

Note that this still requires an IOMMU that covers the DMA of the device to be passed through.

2.14.6 ARM: 16K and 64K page granularity in guests

Status: Supported, with caveats

No support for QEMU backends in a 16K or 64K domain.

2.14.7 ARM: Guest Device Tree support

Status: Supported

2.14.8 ARM: Guest ACPI support

Status: Supported

2.15 Virtual Hardware, QEMU

This section describes supported devices available in HVM mode using a QEMU device model (the default).

Status: Support scope restricted 

Note that other devices are available but not security supported.

2.15.1 x86/Emulated platform devices (QEMU):

Status, piix3: Supported

2.15.2 x86/Emulated network (QEMU):

Status, e1000: Supported
Status, rtl8139: Supported
Status, virtio-net: Supported

2.15.3 x86/Emulated storage (QEMU):

Status, piix3 ide: Supported
Status, ahci: Supported

See the section Blkback for image formats supported by QEMU.

2.15.4 x86/Emulated graphics (QEMU):

Status, cirrus-vga: Supported
Status, stdvga: Supported

2.15.5 x86/Emulated audio (QEMU):

Status, sb16: Supported
Status, es1370: Supported
Status, ac97: Supported

2.15.6 x86/Emulated input (QEMU):

Status, usbmouse: Supported
Status, usbtablet: Supported
Status, ps/2 keyboard: Supported
Status, ps/2 mouse: Supported

2.15.7 x86/Emulated serial card (QEMU):

Status, UART 16550A: Supported

2.15.8 x86/Host USB passthrough (QEMU):

Status: Supported, not security supported

2.15.9 qemu-xen-traditional

The Xen Project provides an old version of QEMU with modifications which enable its use as a device model stub domain. The old version is normally selected by default only in a stub dm configuration, but it can be requested explicitly in other configurations, for example in xl with device_model_version="qemu-xen-traditional".

Status, Device Model Stub Domains: Supported, with caveats
Status, as host process device model: No security support, not recommended

qemu-xen-traditional is security supported only for those available devices which are supported for mainstream QEMU (see above), with trusted driver domains (see Device Model Stub Domains).

2.16 Virtual Firmware

2.16.1 x86/HVM iPXE

Booting a guest via PXE.

Status: Supported, with caveats

PXE inherently places full trust of the guest in the network, and so should only be used when the guest network is under the same administrative control as the guest itself.

2.16.2 x86/HVM BIOS

Booting a guest via guest BIOS firmware

Status, SeaBIOS (qemu-xen): Supported
Status, ROMBIOS (qemu-xen-traditional): Supported

2.16.3 x86/HVM OVMF

OVMF firmware implements the UEFI boot protocol.

Status, qemu-xen: Supported
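
For example, OVMF is selected in the guest configuration:

    # Use OVMF (UEFI) rather than the default BIOS for an HVM guest:
    bios = "ovmf"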

3 Format and definitions

This file contains prose and machine-readable fragments. The data in a machine-readable fragment relate to the section and subsection in which it is found.

The file is in markdown format. The machine-readable fragments are markdown literals containing RFC-822-like (deb822-like) data.

In each case, descriptions which expand on the name of a feature as provided in the section heading precede the Status indications. Any paragraphs which follow the Status indication are caveats or qualifications of the information provided in Status fields.

3.1 Keys found in the Feature Support subsections

3.1.1 Status

This gives the overall status of the feature, including security support status, functional completeness, etc. Refer to the detailed definitions below.

If support differs based on implementation (for instance, x86 / ARM, Linux / QEMU / FreeBSD), one line for each set of implementations will be listed.

3.2 Definition of Status labels

Each Status value corresponds to levels of security support, testing, stability, etc., as follows:

3.2.1 Experimental

Functional completeness: No
Functional stability: Here be dragons
Interface stability: Not stable
Security supported: No

3.2.2 Tech Preview

Functional completeness: Yes
Functional stability: Quirky
Interface stability: Provisionally stable
Security supported: No

3.2.3 Supported

Functional completeness: Yes
Functional stability: Normal
Interface stability: Yes
Security supported: Yes

3.2.4 Deprecated

Functional completeness: Yes
Functional stability: Quirky
Interface stability: No (as in, may disappear the next release)
Security supported: Yes

All of these may appear in modified form. There are several interfaces, for instance, which are officially declared as not stable; in such a case a feature may be described as “Stable / Interface not stable”.

3.3 Definition of the status label interpretation tags

3.3.1 Functionally complete

Does it behave like a fully functional feature? Does it work on all expected platforms, or does it only work for a very specific sub-case? Does it have a sensible UI, or do you have to have a deep understanding of the internals to get it to work properly?

3.3.2 Functional stability

What is the risk of it exhibiting bugs?

General answers to the above:

Here be dragons: Pretty likely to still crash or fail to work; not recommended unless you like life on the bleeding edge.

Quirky: Mostly works, but may have odd behavior here and there; only recommended for playing around or for non-production use cases.

Normal: Ready for production use.

3.3.3 Interface stability

If I build a system based on the current interfaces, will they still work when I upgrade to the next version?

3.3.4 Security supported

Will XSAs be issued if security-related bugs are discovered in the functionality?

If “no”, anyone who finds a security-related bug in the feature will be advised to post it publicly to the Xen Project mailing lists (or contact another security response team, if a relevant one exists).

Bugs found after the end of Security-Support-Until in the Release Support section will receive an XSA if they also affect newer, security-supported, versions of Xen. However, the Xen Project will not provide official fixes for non-security-supported versions.

Three common ‘diversions’ from the ‘Supported’ category are given the following labels:

Supported, not security supported: Functionally complete, normal stability, and interface stable, but no security support.

Supported, Security support external: The feature is security supported by a different organization (not the Xen Project); see “External security support” below.

Supported, with caveats: The feature is security supported, but only subject to the qualifications listed with the feature.

3.3.5 Interaction with other features

Not all features interact well with all other features. Some features are only for HVM guests; some don’t work with migration, &c.

3.3.6 External security support

The XenProject security team provides security support for XenProject projects.

We also provide security support for Xen-related code in Linux, which is an external project but doesn’t have its own security process.

External projects that provide their own security support for Xen-related features include QEMU, libvirt, FreeBSD, NetBSD, and OpenBSD, each of which runs its own security process; these are the projects referred to by the “Security support external” status lines above.