NOTE: this document will be merged into pvh.markdown once PVH is replaced with the HVMlite implementation.

x86/HVM direct boot ABI

Since the Xen entry point into the kernel can be different from the native entry point, an ELFNOTE is used in order to tell the domain builder how to load and jump into the kernel entry point:

ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,          .long,  xen_start32)

The presence of the XEN_ELFNOTE_PHYS32_ENTRY note indicates that the kernel supports the boot ABI described in this document.

The domain builder must load the kernel into the guest memory space and jump into the entry point defined at XEN_ELFNOTE_PHYS32_ENTRY with the following machine state:

 * `ebx`: contains the physical memory address where the loader has placed
   the boot start info structure.

 * `cr0`: bit 0 (PE) must be set. All the other writeable bits are cleared.

 * `cr4`: all bits are cleared.

 * `cs`: must be a 32-bit read/execute code segment with a base of '0'
   and a limit of '0xFFFFFFFF'. The selector value is unspecified.

 * `ds`, `es`: must be a 32-bit read/write data segment with a base of
   '0' and a limit of '0xFFFFFFFF'. The selector values are all unspecified.

 * `tr`: must be a 32-bit TSS (active) with a base of '0' and a limit of '0x67'.

 * `eflags`: bit 17 (VM) must be cleared. Bit 9 (IF) must be cleared.
   Bit 8 (TF) must be cleared. Other bits are all unspecified.

All other processor registers and flag bits are unspecified. The OS is in charge of setting up its own stack, GDT and IDT.

The format of the boot start info structure (pointed to by %ebx) is the following:

struct hvm_start_info {
#define HVM_START_MAGIC_VALUE 0x336ec578
    uint32_t magic;             /* Contains the magic value 0x336ec578       */
                                /* ("xEn3" with the 0x80 bit of the "E" set).*/
    uint32_t flags;             /* SIF_xxx flags.                            */
    uint32_t cmdline_paddr;     /* Physical address of the command line.     */
    uint32_t nr_modules;        /* Number of modules passed to the kernel.   */
    uint32_t modlist_paddr;     /* Physical address of an array of           */
                                /* hvm_modlist_entry.                        */
};

struct hvm_modlist_entry {
    uint32_t paddr;             /* Physical address of the module.           */
    uint32_t size;              /* Size of the module in bytes.              */
};

Other relevant information needed in order to boot a guest kernel (console page address, xenstore event channel...) can be obtained using HVM parameters (HVMOP_get_param), just as is done for HVM guests.

The setup of the hypercall page is also performed in the same way as for HVM guests, using the hypervisor CPUID leaves and MSR ranges.

AP startup

AP startup is performed using hypercalls. The following VCPU operations are used in order to bring up secondary vCPUs:

 * VCPUOP_initialise
 * VCPUOP_up
 * VCPUOP_down