xcp-1.6-updates/xen-4.1.hg

changeset 23278:0aa6bc8f38a9

svm: implement instruction fetch part of DecodeAssist (on #PF/#NPF)

Newer SVM implementations (Bulldozer) copy up to 15 bytes from the
instruction stream into the VMCB when a #PF or #NPF exception is
intercepted. This patch makes use of this information if available.
This saves us from a) traversing the guest's page tables, b) mapping
the guest's memory, and c) copying the instructions from there into
the hypervisor's address space.
This speeds up #NPF intercepts quite a lot and avoids cache and TLB
thrashing.
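
As a minimal standalone sketch of the consumer side (names here are
illustrative stand-ins, not the committed Xen code; the real hook is
svm_get_insn_bytes() in the diff below):

    #include <stdint.h>
    #include <string.h>

    /* Illustrative stand-in for the new VMCB fields. */
    struct vmcb_sketch {
        uint8_t guest_ins_len;   /* low 4 bits: number of valid bytes */
        uint8_t guest_ins[15];   /* bytes captured at intercept time */
    };

    /*
     * Fast path: hand back the bytes the hardware already fetched
     * instead of walking guest page tables and mapping guest memory.
     * A return of 0 tells the caller to fall back to the slow fetch.
     */
    static unsigned int fetch_cached_insn(const struct vmcb_sketch *vmcb,
                                          uint8_t buf[15])
    {
        unsigned int len = vmcb->guest_ins_len & 0xf;

        if ( len != 0 )
            memcpy(buf, vmcb->guest_ins, len);
        return len;
    }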

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 23238:60f5df2afcbb
xen-unstable date: Mon Apr 18 13:36:10 2011 +0100


svm: decode-assists feature must depend on nextrip feature.

...since the decode-assist fast paths assume the nextrip VMCB field
is valid.
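
A minimal sketch of the gating (bit positions follow CPUID leaf
0x8000000A EDX; the helper name is illustrative):

    #include <stdint.h>

    #define SVM_FEATURE_NRIPS          3  /* Next-RIP saved on #VMEXIT */
    #define SVM_FEATURE_DECODEASSISTS  7  /* DecodeAssist available */

    /* If NextRIP is absent, mask DecodeAssists too: the fast paths
     * compute nextrip - rip to advance the guest instruction pointer. */
    static uint32_t sanitize_svm_features(uint32_t feat)
    {
        if ( !(feat & (1u << SVM_FEATURE_NRIPS)) )
            feat &= ~(1u << SVM_FEATURE_DECODEASSISTS);
        return feat;
    }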

Signed-off-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 23237:381ab77db71a
xen-unstable date: Mon Apr 18 10:10:02 2011 +0100


svm: implement INVLPG part of DecodeAssist

Newer SVM implementations (Bulldozer) give the desired address of
an INVLPG intercept explicitly in the EXITINFO1 field of the VMCB.
Use this address to avoid a costly instruction fetch and decode
cycle.
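
Sketched as standalone C (the invalidation hook is a hypothetical
stand-in for Xen's paging code):

    #include <stdint.h>

    /* Hypothetical stand-in for the real TLB/shadow invalidation. */
    static void invalidate_gva(uint64_t linear_addr) { (void)linear_addr; }

    /*
     * With DecodeAssist, EXITINFO1 already holds the linear address
     * operand of INVLPG, and rIP can be advanced from nextrip; nothing
     * needs to be fetched or decoded. Returns the new rIP.
     */
    static uint64_t handle_invlpg(uint64_t exitinfo1, uint64_t nextrip)
    {
        invalidate_gva(exitinfo1);
        return nextrip;
    }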

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
xen-unstable changeset: 23236:e324c4d1dd6e
xen-unstable date: Mon Apr 18 10:06:37 2011 +0100


svm: implement CR access part of DecodeAssist

Newer SVM implementations (Bulldozer) now provide the general
purpose register used by a MOV-CR intercept explicitly. This avoids
fetching and decoding the instruction from guest memory and speeds
up some Windows guests, which exercise CR8 quite often.
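
A minimal decode sketch (exit-code values per the AMD APM; the struct
is illustrative):

    #include <stdint.h>

    #define VMEXIT_CR0_READ  0x00   /* ... VMEXIT_CR15_READ  = 0x0f */
    #define VMEXIT_CR0_WRITE 0x10   /* ... VMEXIT_CR15_WRITE = 0x1f */

    struct cr_access {
        unsigned int cr;    /* which control register */
        unsigned int gpr;   /* GPR operand, EXITINFO1 bits 3:0 */
        int is_write;       /* MOV to CR rather than from CR */
        int valid;          /* EXITINFO1 bit 63: hardware decoded it */
    };

    static struct cr_access decode_cr_access(uint64_t exitcode,
                                             uint64_t exitinfo1)
    {
        struct cr_access a;

        a.valid    = (exitinfo1 >> 63) & 1;
        a.is_write = exitcode >= VMEXIT_CR0_WRITE;
        a.cr       = exitcode & 0xf;
        a.gpr      = exitinfo1 & 0xf;
        return a;
    }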

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 23235:2c8ad607ece1
xen-unstable date: Mon Apr 18 10:01:06 2011 +0100


svm: add bit definitions for SVM DecodeAssist

Chapter 15.33 of recent APM Vol. 2 manuals describes some additions
to SVM called DecodeAssist. Add the newly introduced fields to the
VMCB structure and name the associated CPUID bit.
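
The layout can be pinned down with a small sketch; the offsets are
the ones this patch adds to vmcb.h, expressed relative to the
existing nextrip field at VMCB offset 0xC8:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    struct vmcb_tail {
        uint64_t nextrip;        /* 0xC8 */
        uint8_t  guest_ins_len;  /* 0xD0: low 4 bits = valid byte count */
        uint8_t  guest_ins[15];  /* 0xD1: captured instruction bytes */
        uint64_t pad[100];       /* 0xE0: pad to the save area at 0x400 */
    };

    /* The new fields must sit exactly where the hardware writes them. */
    static_assert(offsetof(struct vmcb_tail, guest_ins_len) == 0xd0 - 0xc8,
                  "guest_ins_len misplaced");
    static_assert(offsetof(struct vmcb_tail, guest_ins) == 0xd1 - 0xc8,
                  "guest_ins misplaced");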

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
xen-unstable changeset: 23234:bf7afd48339a
xen-unstable date: Mon Apr 18 09:49:13 2011 +0100


vmx/hvm: move mov-cr handling functions to generic HVM code

Currently the handling of CR access intercepts is done quite
differently in SVM and VMX. For future use, move the VMX part
into the generic HVM path and use the exported functions.
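
With the helpers exported, a VMX caller reduces to decode-and-dispatch.
A rough sketch (the hvm_mov_* prototypes are the ones this patch
exports; the stub bodies are illustrative):

    /* Stubs standing in for the helpers this patch exports in
     * xen/include/asm-x86/hvm/support.h. */
    static int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
    { (void)cr; (void)gpr; return 0; }
    static int hvm_mov_from_cr(unsigned int cr, unsigned int gpr)
    { (void)cr; (void)gpr; return 0; }

    /* Decode a VMX exit qualification: bits 3:0 = CR number,
     * 5:4 = access type (0 = MOV to CR, 1 = MOV from CR),
     * 11:8 = GPR operand. CLTS/LMSW (types 2/3) are handled elsewhere. */
    static int dispatch_cr_access(unsigned long exit_qualification)
    {
        unsigned int cr   = exit_qualification & 0xf;
        unsigned int type = (exit_qualification >> 4) & 0x3;
        unsigned int gpr  = (exit_qualification >> 8) & 0xf;

        return (type == 0) ? hvm_mov_to_cr(cr, gpr)
                           : hvm_mov_from_cr(cr, gpr);
    }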

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Keir Fraser <keir@xen.org>
xen-unstable changeset: 23233:1276926e3795
xen-unstable date: Mon Apr 18 09:47:12 2011 +0100
author Andre Przywara <andre.przywara@amd.com>
date Thu Apr 12 09:06:02 2012 +0100 (2012-04-12)
parents 80130491806f
children 7d9df818d302
files xen/arch/x86/hvm/emulate.c xen/arch/x86/hvm/hvm.c xen/arch/x86/hvm/svm/svm.c xen/arch/x86/hvm/vmx/vmx.c xen/arch/x86/traps.c xen/include/asm-x86/hvm/hvm.h xen/include/asm-x86/hvm/support.h xen/include/asm-x86/hvm/svm/svm.h xen/include/asm-x86/hvm/svm/vmcb.h xen/include/asm-x86/hvm/vmx/vmx.h xen/include/asm-x86/processor.h
line diff
     1.1 --- a/xen/arch/x86/hvm/emulate.c	Wed Apr 11 19:41:14 2012 +0100
     1.2 +++ b/xen/arch/x86/hvm/emulate.c	Thu Apr 12 09:06:02 2012 +0100
     1.3 @@ -996,6 +996,8 @@ int hvm_emulate_one(
     1.4  
     1.5      hvmemul_ctxt->insn_buf_eip = regs->eip;
     1.6      hvmemul_ctxt->insn_buf_bytes =
     1.7 +        hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)
     1.8 +        ? :
     1.9          (hvm_virtual_to_linear_addr(
    1.10              x86_seg_cs, &hvmemul_ctxt->seg_reg[x86_seg_cs],
    1.11              regs->eip, sizeof(hvmemul_ctxt->insn_buf),
     2.1 --- a/xen/arch/x86/hvm/hvm.c	Wed Apr 11 19:41:14 2012 +0100
     2.2 +++ b/xen/arch/x86/hvm/hvm.c	Thu Apr 12 09:06:02 2012 +0100
     2.3 @@ -1306,6 +1306,86 @@ static void hvm_set_uc_mode(struct vcpu 
     2.4          return hvm_funcs.set_uc_mode(v);
     2.5  }
     2.6  
     2.7 +int hvm_mov_to_cr(unsigned int cr, unsigned int gpr)
     2.8 +{
     2.9 +    struct vcpu *curr = current;
    2.10 +    unsigned long val, *reg;
    2.11 +
    2.12 +    if ( (reg = get_x86_gpr(guest_cpu_user_regs(), gpr)) == NULL )
    2.13 +    {
    2.14 +        gdprintk(XENLOG_ERR, "invalid gpr: %u\n", gpr);
    2.15 +        goto exit_and_crash;
    2.16 +    }
    2.17 +
    2.18 +    val = *reg;
    2.19 +    HVMTRACE_LONG_2D(CR_WRITE, cr, TRC_PAR_LONG(val));
    2.20 +    HVM_DBG_LOG(DBG_LEVEL_1, "CR%u, value = %lx", cr, val);
    2.21 +
    2.22 +    switch ( cr )
    2.23 +    {
    2.24 +    case 0:
    2.25 +        return hvm_set_cr0(val);
    2.26 +
    2.27 +    case 3:
    2.28 +        return hvm_set_cr3(val);
    2.29 +
    2.30 +    case 4:
    2.31 +        return hvm_set_cr4(val);
    2.32 +
    2.33 +    case 8:
    2.34 +        vlapic_set_reg(vcpu_vlapic(curr), APIC_TASKPRI, ((val & 0x0f) << 4));
    2.35 +        break;
    2.36 +
    2.37 +    default:
    2.38 +        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
    2.39 +        goto exit_and_crash;
    2.40 +    }
    2.41 +
    2.42 +    return X86EMUL_OKAY;
    2.43 +
    2.44 + exit_and_crash:
    2.45 +    domain_crash(curr->domain);
    2.46 +    return X86EMUL_UNHANDLEABLE;
    2.47 +}
    2.48 +
    2.49 +int hvm_mov_from_cr(unsigned int cr, unsigned int gpr)
    2.50 +{
    2.51 +    struct vcpu *curr = current;
    2.52 +    unsigned long val = 0, *reg;
    2.53 +
    2.54 +    if ( (reg = get_x86_gpr(guest_cpu_user_regs(), gpr)) == NULL )
    2.55 +    {
    2.56 +        gdprintk(XENLOG_ERR, "invalid gpr: %u\n", gpr);
    2.57 +        goto exit_and_crash;
    2.58 +    }
    2.59 +
    2.60 +    switch ( cr )
    2.61 +    {
    2.62 +    case 0:
    2.63 +    case 2:
    2.64 +    case 3:
    2.65 +    case 4:
    2.66 +        val = curr->arch.hvm_vcpu.guest_cr[cr];
    2.67 +        break;
    2.68 +    case 8:
    2.69 +        val = (vlapic_get_reg(vcpu_vlapic(curr), APIC_TASKPRI) & 0xf0) >> 4;
    2.70 +        break;
    2.71 +    default:
    2.72 +        gdprintk(XENLOG_ERR, "invalid cr: %u\n", cr);
    2.73 +        goto exit_and_crash;
    2.74 +    }
    2.75 +
    2.76 +    *reg = val;
    2.77 +    HVMTRACE_LONG_2D(CR_READ, cr, TRC_PAR_LONG(val));
    2.78 +    HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR%u, value = %lx", cr, val);
    2.79 +
    2.80 +    return X86EMUL_OKAY;
    2.81 +
    2.82 + exit_and_crash:
    2.83 +    domain_crash(curr->domain);
    2.84 +    return X86EMUL_UNHANDLEABLE;
    2.85 +}
    2.86 +
    2.87  int hvm_set_cr0(unsigned long value)
    2.88  {
    2.89      struct vcpu *v = current;
     3.1 --- a/xen/arch/x86/hvm/svm/svm.c	Wed Apr 11 19:41:14 2012 +0100
     3.2 +++ b/xen/arch/x86/hvm/svm/svm.c	Thu Apr 12 09:06:02 2012 +0100
     3.3 @@ -603,6 +603,21 @@ static void svm_set_rdtsc_exiting(struct
     3.4      vmcb_set_general1_intercepts(vmcb, general1_intercepts);
     3.5  }
     3.6  
     3.7 +static unsigned int svm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
     3.8 +{
     3.9 +    struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
    3.10 +    unsigned int len = v->arch.hvm_svm.cached_insn_len;
    3.11 +
    3.12 +    if ( len != 0 )
    3.13 +    {
    3.14 +        /* Latch and clear the cached instruction. */
    3.15 +        memcpy(buf, vmcb->guest_ins, 15);
    3.16 +        v->arch.hvm_svm.cached_insn_len = 0;
    3.17 +    }
    3.18 +
    3.19 +    return len;
    3.20 +}
    3.21 +
    3.22  static void svm_init_hypercall_page(struct domain *d, void *hypercall_page)
    3.23  {
    3.24      char *p;
    3.25 @@ -928,11 +943,16 @@ struct hvm_function_table * __init start
    3.26  
    3.27      printk("SVM: Supported advanced features:\n");
    3.28  
    3.29 +    /* DecodeAssists fast paths assume nextrip is valid for fast rIP update. */
    3.30 +    if ( !cpu_has_svm_nrips )
    3.31 +        clear_bit(SVM_FEATURE_DECODEASSISTS, &svm_feature_flags);
    3.32 +
    3.33  #define P(p,s) if ( p ) { printk(" - %s\n", s); printed = 1; }
    3.34      P(cpu_has_svm_npt, "Nested Page Tables (NPT)");
    3.35      P(cpu_has_svm_lbrv, "Last Branch Record (LBR) Virtualisation");
    3.36      P(cpu_has_svm_nrips, "Next-RIP Saved on #VMEXIT");
    3.37      P(cpu_has_svm_cleanbits, "VMCB Clean Bits");
    3.38 +    P(cpu_has_svm_decode, "DecodeAssists");
    3.39      P(cpu_has_pause_filter, "Pause-Intercept Filter");
    3.40  #undef P
    3.41  
    3.42 @@ -1034,6 +1054,22 @@ static void svm_vmexit_do_cpuid(struct c
    3.43      __update_guest_eip(regs, inst_len);
    3.44  }
    3.45  
    3.46 +static void svm_vmexit_do_cr_access(
    3.47 +    struct vmcb_struct *vmcb, struct cpu_user_regs *regs)
    3.48 +{
    3.49 +    int gp, cr, dir, rc;
    3.50 +
    3.51 +    cr = vmcb->exitcode - VMEXIT_CR0_READ;
    3.52 +    dir = (cr > 15);
    3.53 +    cr &= 0xf;
    3.54 +    gp = vmcb->exitinfo1 & 0xf;
    3.55 +
    3.56 +    rc = dir ? hvm_mov_to_cr(cr, gp) : hvm_mov_from_cr(cr, gp);
    3.57 +
    3.58 +    if ( rc == X86EMUL_OKAY )
    3.59 +        __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
    3.60 +}
    3.61 +
    3.62  static void svm_dr_access(struct vcpu *v, struct cpu_user_regs *regs)
    3.63  {
    3.64      HVMTRACE_0D(DR_WRITE);
    3.65 @@ -1427,7 +1463,8 @@ static struct hvm_function_table __read_
    3.66      .msr_read_intercept   = svm_msr_read_intercept,
    3.67      .msr_write_intercept  = svm_msr_write_intercept,
    3.68      .invlpg_intercept     = svm_invlpg_intercept,
    3.69 -    .set_rdtsc_exiting    = svm_set_rdtsc_exiting
    3.70 +    .set_rdtsc_exiting    = svm_set_rdtsc_exiting,
    3.71 +    .get_insn_bytes       = svm_get_insn_bytes,
    3.72  };
    3.73  
    3.74  asmlinkage void svm_vmexit_handler(struct cpu_user_regs *regs)
    3.75 @@ -1533,7 +1570,12 @@ asmlinkage void svm_vmexit_handler(struc
    3.76                      (unsigned long)regs->ecx, (unsigned long)regs->edx,
    3.77                      (unsigned long)regs->esi, (unsigned long)regs->edi);
    3.78  
    3.79 -        if ( paging_fault(va, regs) )
    3.80 +        if ( cpu_has_svm_decode )
    3.81 +            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
    3.82 +        rc = paging_fault(va, regs);
    3.83 +        v->arch.hvm_svm.cached_insn_len = 0;
    3.84 +
    3.85 +        if ( rc )
    3.86          {
    3.87              if ( trace_will_trace_event(TRC_SHADOW) )
    3.88                  break;
    3.89 @@ -1615,12 +1657,29 @@ asmlinkage void svm_vmexit_handler(struc
    3.90              int dir = (vmcb->exitinfo1 & 1) ? IOREQ_READ : IOREQ_WRITE;
    3.91              if ( handle_pio(port, bytes, dir) )
    3.92                  __update_guest_eip(regs, vmcb->exitinfo2 - vmcb->rip);
    3.93 -            break;
    3.94          }
    3.95 -        /* fallthrough to emulation if a string instruction */
    3.96 +        else if ( !handle_mmio() )
    3.97 +            hvm_inject_exception(TRAP_gp_fault, 0, 0);
    3.98 +        break;
    3.99 +
   3.100      case VMEXIT_CR0_READ ... VMEXIT_CR15_READ:
   3.101      case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
   3.102 +        if ( cpu_has_svm_decode && (vmcb->exitinfo1 & (1ULL << 63)) )
   3.103 +            svm_vmexit_do_cr_access(vmcb, regs);
   3.104 +        else if ( !handle_mmio() ) 
   3.105 +            hvm_inject_exception(TRAP_gp_fault, 0, 0);
   3.106 +        break;
   3.107 +
   3.108      case VMEXIT_INVLPG:
   3.109 +        if ( cpu_has_svm_decode )
   3.110 +        {
   3.111 +            svm_invlpg_intercept(vmcb->exitinfo1);
   3.112 +            __update_guest_eip(regs, vmcb->nextrip - vmcb->rip);
   3.113 +        }
   3.114 +        else if ( !handle_mmio() )
   3.115 +            hvm_inject_exception(TRAP_gp_fault, 0, 0);
   3.116 +        break;
   3.117 +
   3.118      case VMEXIT_INVLPGA:
   3.119          if ( !handle_mmio() )
   3.120              hvm_inject_exception(TRAP_gp_fault, 0, 0);
   3.121 @@ -1680,7 +1739,10 @@ asmlinkage void svm_vmexit_handler(struc
   3.122      case VMEXIT_NPF:
   3.123          perfc_incra(svmexits, VMEXIT_NPF_PERFC);
   3.124          regs->error_code = vmcb->exitinfo1;
   3.125 +        if ( cpu_has_svm_decode )
   3.126 +            v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
   3.127          svm_do_nested_pgfault(vmcb->exitinfo2);
   3.128 +        v->arch.hvm_svm.cached_insn_len = 0;
   3.129          break;
   3.130  
   3.131      case VMEXIT_IRET: {
     4.1 --- a/xen/arch/x86/hvm/vmx/vmx.c	Wed Apr 11 19:41:14 2012 +0100
     4.2 +++ b/xen/arch/x86/hvm/vmx/vmx.c	Thu Apr 12 09:06:02 2012 +0100
     4.3 @@ -1545,182 +1545,42 @@ static void vmx_invlpg_intercept(unsigne
     4.4          vpid_sync_vcpu_gva(curr, vaddr);
     4.5  }
     4.6  
     4.7 -#define CASE_SET_REG(REG, reg)      \
     4.8 -    case VMX_CONTROL_REG_ACCESS_GPR_ ## REG: regs->reg = value; break
     4.9 -#define CASE_GET_REG(REG, reg)      \
    4.10 -    case VMX_CONTROL_REG_ACCESS_GPR_ ## REG: value = regs->reg; break
    4.11 -
    4.12 -#define CASE_EXTEND_SET_REG         \
    4.13 -    CASE_EXTEND_REG(S)
    4.14 -#define CASE_EXTEND_GET_REG         \
    4.15 -    CASE_EXTEND_REG(G)
    4.16 -
    4.17 -#ifdef __i386__
    4.18 -#define CASE_EXTEND_REG(T)
    4.19 -#else
    4.20 -#define CASE_EXTEND_REG(T)          \
    4.21 -    CASE_ ## T ## ET_REG(R8, r8);   \
    4.22 -    CASE_ ## T ## ET_REG(R9, r9);   \
    4.23 -    CASE_ ## T ## ET_REG(R10, r10); \
    4.24 -    CASE_ ## T ## ET_REG(R11, r11); \
    4.25 -    CASE_ ## T ## ET_REG(R12, r12); \
    4.26 -    CASE_ ## T ## ET_REG(R13, r13); \
    4.27 -    CASE_ ## T ## ET_REG(R14, r14); \
    4.28 -    CASE_ ## T ## ET_REG(R15, r15)
    4.29 -#endif
    4.30 -
    4.31 -static int mov_to_cr(int gp, int cr, struct cpu_user_regs *regs)
    4.32 +static int vmx_cr_access(unsigned long exit_qualification)
    4.33  {
    4.34 -    unsigned long value;
    4.35 -    struct vcpu *v = current;
    4.36 -    struct vlapic *vlapic = vcpu_vlapic(v);
    4.37 -    int rc = 0;
    4.38 -    unsigned long old;
    4.39 -
    4.40 -    switch ( gp )
    4.41 -    {
    4.42 -    CASE_GET_REG(EAX, eax);
    4.43 -    CASE_GET_REG(ECX, ecx);
    4.44 -    CASE_GET_REG(EDX, edx);
    4.45 -    CASE_GET_REG(EBX, ebx);
    4.46 -    CASE_GET_REG(EBP, ebp);
    4.47 -    CASE_GET_REG(ESI, esi);
    4.48 -    CASE_GET_REG(EDI, edi);
    4.49 -    CASE_GET_REG(ESP, esp);
    4.50 -    CASE_EXTEND_GET_REG;
    4.51 -    default:
    4.52 -        gdprintk(XENLOG_ERR, "invalid gp: %d\n", gp);
    4.53 -        goto exit_and_crash;
    4.54 -    }
    4.55 -
    4.56 -    HVMTRACE_LONG_2D(CR_WRITE, cr, TRC_PAR_LONG(value));
    4.57 -
    4.58 -    HVM_DBG_LOG(DBG_LEVEL_1, "CR%d, value = %lx", cr, value);
    4.59 -
    4.60 -    switch ( cr )
    4.61 -    {
    4.62 -    case 0:
    4.63 -        old = v->arch.hvm_vcpu.guest_cr[0];
    4.64 -        rc = !hvm_set_cr0(value);
    4.65 -        if (rc)
    4.66 -            hvm_memory_event_cr0(value, old);
    4.67 -        return rc;
    4.68 -
    4.69 -    case 3:
    4.70 -        old = v->arch.hvm_vcpu.guest_cr[3];
    4.71 -        rc = !hvm_set_cr3(value);
    4.72 -        if (rc)
    4.73 -            hvm_memory_event_cr3(value, old);        
    4.74 -        return rc;
    4.75 -
    4.76 -    case 4:
    4.77 -        old = v->arch.hvm_vcpu.guest_cr[4];
    4.78 -        rc = !hvm_set_cr4(value);
    4.79 -        if (rc)
    4.80 -            hvm_memory_event_cr4(value, old);
    4.81 -        return rc; 
    4.82 -
    4.83 -    case 8:
    4.84 -        vlapic_set_reg(vlapic, APIC_TASKPRI, ((value & 0x0F) << 4));
    4.85 -        break;
    4.86 +    struct vcpu *curr = current;
    4.87  
    4.88 -    default:
    4.89 -        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
    4.90 -        goto exit_and_crash;
    4.91 -    }
    4.92 -
    4.93 -    return 1;
    4.94 -
    4.95 - exit_and_crash:
    4.96 -    domain_crash(v->domain);
    4.97 -    return 0;
    4.98 -}
    4.99 -
   4.100 -/*
   4.101 - * Read from control registers. CR0 and CR4 are read from the shadow.
   4.102 - */
   4.103 -static void mov_from_cr(int cr, int gp, struct cpu_user_regs *regs)
   4.104 -{
   4.105 -    unsigned long value = 0;
   4.106 -    struct vcpu *v = current;
   4.107 -    struct vlapic *vlapic = vcpu_vlapic(v);
   4.108 -
   4.109 -    switch ( cr )
   4.110 +    switch ( VMX_CONTROL_REG_ACCESS_TYPE(exit_qualification) )
   4.111      {
   4.112 -    case 3:
   4.113 -        value = (unsigned long)v->arch.hvm_vcpu.guest_cr[3];
   4.114 -        break;
   4.115 -    case 8:
   4.116 -        value = (unsigned long)vlapic_get_reg(vlapic, APIC_TASKPRI);
   4.117 -        value = (value & 0xF0) >> 4;
   4.118 -        break;
   4.119 -    default:
   4.120 -        gdprintk(XENLOG_ERR, "invalid cr: %d\n", cr);
   4.121 -        domain_crash(v->domain);
   4.122 -        break;
   4.123 +    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR: {
   4.124 +        unsigned long gp = VMX_CONTROL_REG_ACCESS_GPR(exit_qualification);
   4.125 +        unsigned long cr = VMX_CONTROL_REG_ACCESS_NUM(exit_qualification);
   4.126 +        return hvm_mov_to_cr(cr, gp);
   4.127      }
   4.128 -
   4.129 -    switch ( gp ) {
   4.130 -    CASE_SET_REG(EAX, eax);
   4.131 -    CASE_SET_REG(ECX, ecx);
   4.132 -    CASE_SET_REG(EDX, edx);
   4.133 -    CASE_SET_REG(EBX, ebx);
   4.134 -    CASE_SET_REG(EBP, ebp);
   4.135 -    CASE_SET_REG(ESI, esi);
   4.136 -    CASE_SET_REG(EDI, edi);
   4.137 -    CASE_SET_REG(ESP, esp);
   4.138 -    CASE_EXTEND_SET_REG;
   4.139 -    default:
   4.140 -        printk("invalid gp: %d\n", gp);
   4.141 -        domain_crash(v->domain);
   4.142 -        break;
   4.143 +    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR: {
   4.144 +        unsigned long gp = VMX_CONTROL_REG_ACCESS_GPR(exit_qualification);
   4.145 +        unsigned long cr = VMX_CONTROL_REG_ACCESS_NUM(exit_qualification);
   4.146 +        return hvm_mov_from_cr(cr, gp);
   4.147      }
   4.148 -
   4.149 -    HVMTRACE_LONG_2D(CR_READ, cr, TRC_PAR_LONG(value));
   4.150 -
   4.151 -    HVM_DBG_LOG(DBG_LEVEL_VMMU, "CR%d, value = %lx", cr, value);
   4.152 -}
   4.153 -
   4.154 -static int vmx_cr_access(unsigned long exit_qualification,
   4.155 -                         struct cpu_user_regs *regs)
   4.156 -{
   4.157 -    unsigned int gp, cr;
   4.158 -    unsigned long value;
   4.159 -    struct vcpu *v = current;
   4.160 -
   4.161 -    switch ( exit_qualification & VMX_CONTROL_REG_ACCESS_TYPE )
   4.162 -    {
   4.163 -    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR:
   4.164 -        gp = exit_qualification & VMX_CONTROL_REG_ACCESS_GPR;
   4.165 -        cr = exit_qualification & VMX_CONTROL_REG_ACCESS_NUM;
   4.166 -        return mov_to_cr(gp, cr, regs);
   4.167 -    case VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR:
   4.168 -        gp = exit_qualification & VMX_CONTROL_REG_ACCESS_GPR;
   4.169 -        cr = exit_qualification & VMX_CONTROL_REG_ACCESS_NUM;
   4.170 -        mov_from_cr(cr, gp, regs);
   4.171 -        break;
   4.172 -    case VMX_CONTROL_REG_ACCESS_TYPE_CLTS: 
   4.173 -    {
   4.174 -        unsigned long old = v->arch.hvm_vcpu.guest_cr[0];
   4.175 -        v->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
   4.176 -        vmx_update_guest_cr(v, 0);
   4.177 -
   4.178 -        hvm_memory_event_cr0(v->arch.hvm_vcpu.guest_cr[0], old);
   4.179 -
   4.180 +    case VMX_CONTROL_REG_ACCESS_TYPE_CLTS: {
   4.181 +        unsigned long old = curr->arch.hvm_vcpu.guest_cr[0];
   4.182 +        curr->arch.hvm_vcpu.guest_cr[0] &= ~X86_CR0_TS;
   4.183 +        vmx_update_guest_cr(curr, 0);
   4.184 +        hvm_memory_event_cr0(curr->arch.hvm_vcpu.guest_cr[0], old);
   4.185          HVMTRACE_0D(CLTS);
   4.186          break;
   4.187      }
   4.188 -    case VMX_CONTROL_REG_ACCESS_TYPE_LMSW:
   4.189 -        value = v->arch.hvm_vcpu.guest_cr[0];
   4.190 +    case VMX_CONTROL_REG_ACCESS_TYPE_LMSW: {
   4.191 +        unsigned long value = curr->arch.hvm_vcpu.guest_cr[0];
   4.192          /* LMSW can: (1) set bits 0-3; (2) clear bits 1-3. */
   4.193          value = (value & ~0xe) | ((exit_qualification >> 16) & 0xf);
   4.194          HVMTRACE_LONG_1D(LMSW, value);
   4.195 -        return !hvm_set_cr0(value);
   4.196 +        return hvm_set_cr0(value);
   4.197 +    }
   4.198      default:
   4.199          BUG();
   4.200      }
   4.201  
   4.202 -    return 1;
   4.203 +    return X86EMUL_OKAY;
   4.204  }
   4.205  
   4.206  static const struct lbr_info {
   4.207 @@ -2525,7 +2385,7 @@ asmlinkage void vmx_vmexit_handler(struc
   4.208      case EXIT_REASON_CR_ACCESS:
   4.209      {
   4.210          exit_qualification = __vmread(EXIT_QUALIFICATION);
   4.211 -        if ( vmx_cr_access(exit_qualification, regs) )
   4.212 +        if ( vmx_cr_access(exit_qualification) == X86EMUL_OKAY )
   4.213              update_guest_eip(); /* Safe: MOV Cn, LMSW, CLTS */
   4.214          break;
   4.215      }
     5.1 --- a/xen/arch/x86/traps.c	Wed Apr 11 19:41:14 2012 +0100
     5.2 +++ b/xen/arch/x86/traps.c	Thu Apr 12 09:06:02 2012 +0100
     5.3 @@ -368,6 +368,36 @@ void vcpu_show_execution_state(struct vc
     5.4      vcpu_unpause(v);
     5.5  }
     5.6  
     5.7 +unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg)
     5.8 +{
     5.9 +    void *p;
    5.10 +
    5.11 +    switch ( modrm_reg )
    5.12 +    {
    5.13 +    case  0: p = &regs->eax; break;
    5.14 +    case  1: p = &regs->ecx; break;
    5.15 +    case  2: p = &regs->edx; break;
    5.16 +    case  3: p = &regs->ebx; break;
    5.17 +    case  4: p = &regs->esp; break;
    5.18 +    case  5: p = &regs->ebp; break;
    5.19 +    case  6: p = &regs->esi; break;
    5.20 +    case  7: p = &regs->edi; break;
    5.21 +#if defined(__x86_64__)
    5.22 +    case  8: p = &regs->r8;  break;
    5.23 +    case  9: p = &regs->r9;  break;
    5.24 +    case 10: p = &regs->r10; break;
    5.25 +    case 11: p = &regs->r11; break;
    5.26 +    case 12: p = &regs->r12; break;
    5.27 +    case 13: p = &regs->r13; break;
    5.28 +    case 14: p = &regs->r14; break;
    5.29 +    case 15: p = &regs->r15; break;
    5.30 +#endif
    5.31 +    default: p = NULL; break;
    5.32 +    }
    5.33 +
    5.34 +    return p;
    5.35 +}
    5.36 +
    5.37  static char *trapstr(int trapnr)
    5.38  {
    5.39      static char *strings[] = { 
     6.1 --- a/xen/include/asm-x86/hvm/hvm.h	Wed Apr 11 19:41:14 2012 +0100
     6.2 +++ b/xen/include/asm-x86/hvm/hvm.h	Thu Apr 12 09:06:02 2012 +0100
     6.3 @@ -132,6 +132,9 @@ struct hvm_function_table {
     6.4      int  (*cpu_up)(void);
     6.5      void (*cpu_down)(void);
     6.6  
     6.7 +    /* Copy up to 15 bytes from cached instruction bytes at current rIP. */
     6.8 +    unsigned int (*get_insn_bytes)(struct vcpu *v, uint8_t *buf);
     6.9 +
    6.10      /* Instruction intercepts: non-void return values are X86EMUL codes. */
    6.11      void (*cpuid_intercept)(
    6.12          unsigned int *eax, unsigned int *ebx,
    6.13 @@ -328,6 +331,11 @@ static inline void hvm_cpu_down(void)
    6.14          hvm_funcs.cpu_down();
    6.15  }
    6.16  
    6.17 +static inline unsigned int hvm_get_insn_bytes(struct vcpu *v, uint8_t *buf)
    6.18 +{
    6.19 +    return (hvm_funcs.get_insn_bytes ? hvm_funcs.get_insn_bytes(v, buf) : 0);
    6.20 +}
    6.21 +
    6.22  enum hvm_task_switch_reason { TSW_jmp, TSW_iret, TSW_call_or_int };
    6.23  void hvm_task_switch(
    6.24      uint16_t tss_sel, enum hvm_task_switch_reason taskswitch_reason,
     7.1 --- a/xen/include/asm-x86/hvm/support.h	Wed Apr 11 19:41:14 2012 +0100
     7.2 +++ b/xen/include/asm-x86/hvm/support.h	Thu Apr 12 09:06:02 2012 +0100
     7.3 @@ -137,5 +137,7 @@ int hvm_set_cr3(unsigned long value);
     7.4  int hvm_set_cr4(unsigned long value);
     7.5  int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content);
     7.6  int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content);
     7.7 +int hvm_mov_to_cr(unsigned int cr, unsigned int gpr);
     7.8 +int hvm_mov_from_cr(unsigned int cr, unsigned int gpr);
     7.9  
    7.10  #endif /* __ASM_X86_HVM_SUPPORT_H__ */
     8.1 --- a/xen/include/asm-x86/hvm/svm/svm.h	Wed Apr 11 19:41:14 2012 +0100
     8.2 +++ b/xen/include/asm-x86/hvm/svm/svm.h	Thu Apr 12 09:06:02 2012 +0100
     8.3 @@ -80,6 +80,7 @@ extern u32 svm_feature_flags;
     8.4  #define cpu_has_svm_svml      cpu_has_svm_feature(SVM_FEATURE_SVML)
     8.5  #define cpu_has_svm_nrips     cpu_has_svm_feature(SVM_FEATURE_NRIPS)
     8.6  #define cpu_has_svm_cleanbits cpu_has_svm_feature(SVM_FEATURE_VMCBCLEAN)
     8.7 +#define cpu_has_svm_decode    cpu_has_svm_feature(SVM_FEATURE_DECODEASSISTS)
     8.8  #define cpu_has_pause_filter  cpu_has_svm_feature(SVM_FEATURE_PAUSEFILTER)
     8.9  
    8.10  #endif /* __ASM_X86_HVM_SVM_H__ */
     9.1 --- a/xen/include/asm-x86/hvm/svm/vmcb.h	Wed Apr 11 19:41:14 2012 +0100
     9.2 +++ b/xen/include/asm-x86/hvm/svm/vmcb.h	Thu Apr 12 09:06:02 2012 +0100
     9.3 @@ -432,7 +432,9 @@ struct vmcb_struct {
     9.4      vmcbcleanbits_t cleanbits;  /* offset 0xC0 */
     9.5      u32 res09;                  /* offset 0xC4 */
     9.6      u64 nextrip;                /* offset 0xC8 */
     9.7 -    u64 res10a[102];            /* offset 0xD0 pad to save area */
     9.8 +    u8  guest_ins_len;          /* offset 0xD0 */
     9.9 +    u8  guest_ins[15];          /* offset 0xD1 */
    9.10 +    u64 res10a[100];            /* offset 0xE0 pad to save area */
    9.11  
    9.12      svm_segment_register_t es;  /* offset 1024 - cleanbit 8 */
    9.13      svm_segment_register_t cs;  /* cleanbit 8 */
    9.14 @@ -496,6 +498,9 @@ struct arch_svm_struct {
    9.15      int    launch_core;
    9.16      bool_t vmcb_in_sync;    /* VMCB sync'ed with VMSAVE? */
    9.17  
    9.18 +    /* VMCB has a cached instruction from #PF/#NPF Decode Assist? */
    9.19 +    uint8_t cached_insn_len; /* Zero if no cached instruction. */
    9.20 +
    9.21      /* Upper four bytes are undefined in the VMCB, therefore we can't
    9.22       * use the fields in the VMCB. Write a 64bit value and then read a 64bit
    9.23       * value is fine unless there's a VMRUN/VMEXIT in between which clears
    10.1 --- a/xen/include/asm-x86/hvm/vmx/vmx.h	Wed Apr 11 19:41:14 2012 +0100
    10.2 +++ b/xen/include/asm-x86/hvm/vmx/vmx.h	Thu Apr 12 09:06:02 2012 +0100
    10.3 @@ -144,31 +144,15 @@ void vmx_update_cpu_exec_control(struct 
    10.4   * Exit Qualifications for MOV for Control Register Access
    10.5   */
    10.6   /* 3:0 - control register number (CRn) */
    10.7 -#define VMX_CONTROL_REG_ACCESS_NUM      0xf
    10.8 +#define VMX_CONTROL_REG_ACCESS_NUM(eq)  ((eq) & 0xf)
    10.9   /* 5:4 - access type (CR write, CR read, CLTS, LMSW) */
   10.10 -#define VMX_CONTROL_REG_ACCESS_TYPE     0x30
   10.11 +#define VMX_CONTROL_REG_ACCESS_TYPE(eq) (((eq) >> 4) & 0x3)
   10.12 +# define VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR   0
   10.13 +# define VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR 1
   10.14 +# define VMX_CONTROL_REG_ACCESS_TYPE_CLTS        2
   10.15 +# define VMX_CONTROL_REG_ACCESS_TYPE_LMSW        3
   10.16   /* 10:8 - general purpose register operand */
   10.17 -#define VMX_CONTROL_REG_ACCESS_GPR      0xf00
   10.18 -#define VMX_CONTROL_REG_ACCESS_TYPE_MOV_TO_CR   (0 << 4)
   10.19 -#define VMX_CONTROL_REG_ACCESS_TYPE_MOV_FROM_CR (1 << 4)
   10.20 -#define VMX_CONTROL_REG_ACCESS_TYPE_CLTS        (2 << 4)
   10.21 -#define VMX_CONTROL_REG_ACCESS_TYPE_LMSW        (3 << 4)
   10.22 -#define VMX_CONTROL_REG_ACCESS_GPR_EAX  (0 << 8)
   10.23 -#define VMX_CONTROL_REG_ACCESS_GPR_ECX  (1 << 8)
   10.24 -#define VMX_CONTROL_REG_ACCESS_GPR_EDX  (2 << 8)
   10.25 -#define VMX_CONTROL_REG_ACCESS_GPR_EBX  (3 << 8)
   10.26 -#define VMX_CONTROL_REG_ACCESS_GPR_ESP  (4 << 8)
   10.27 -#define VMX_CONTROL_REG_ACCESS_GPR_EBP  (5 << 8)
   10.28 -#define VMX_CONTROL_REG_ACCESS_GPR_ESI  (6 << 8)
   10.29 -#define VMX_CONTROL_REG_ACCESS_GPR_EDI  (7 << 8)
   10.30 -#define VMX_CONTROL_REG_ACCESS_GPR_R8   (8 << 8)
   10.31 -#define VMX_CONTROL_REG_ACCESS_GPR_R9   (9 << 8)
   10.32 -#define VMX_CONTROL_REG_ACCESS_GPR_R10  (10 << 8)
   10.33 -#define VMX_CONTROL_REG_ACCESS_GPR_R11  (11 << 8)
   10.34 -#define VMX_CONTROL_REG_ACCESS_GPR_R12  (12 << 8)
   10.35 -#define VMX_CONTROL_REG_ACCESS_GPR_R13  (13 << 8)
   10.36 -#define VMX_CONTROL_REG_ACCESS_GPR_R14  (14 << 8)
   10.37 -#define VMX_CONTROL_REG_ACCESS_GPR_R15  (15 << 8)
   10.38 +#define VMX_CONTROL_REG_ACCESS_GPR(eq)  (((eq) >> 8) & 0xf)
   10.39  
   10.40  /*
   10.41   * Access Rights
    11.1 --- a/xen/include/asm-x86/processor.h	Wed Apr 11 19:41:14 2012 +0100
    11.2 +++ b/xen/include/asm-x86/processor.h	Thu Apr 12 09:06:02 2012 +0100
    11.3 @@ -593,6 +593,8 @@ int wrmsr_hypervisor_regs(uint32_t idx, 
    11.4  int microcode_update(XEN_GUEST_HANDLE(const_void), unsigned long len);
    11.5  int microcode_resume_cpu(int cpu);
    11.6  
    11.7 +unsigned long *get_x86_gpr(struct cpu_user_regs *regs, unsigned int modrm_reg);
    11.8 +
    11.9  #endif /* !__ASSEMBLY__ */
   11.10  
   11.11  #endif /* __ASM_X86_PROCESSOR_H */