author | Sean Christopherson <sean.j.christopherson@intel.com> | 2018-09-26 09:23:53 -0700 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2018-10-17 00:29:57 +0200 |
commit | cb61de2f4819b248dfb3f1380e046ab1276c5227 (patch) | |
tree | 2636dff821e48c38e1ebffb9c890857bc733946c /arch/x86/kvm/vmx.c | |
parent | 16fb9a46c54d1617dbc9b6baf0538abf9c004421 (diff) | |
download | linux-cb61de2f4819b248dfb3f1380e046ab1276c5227.tar.bz2 | |
KVM: nVMX: do not skip VMEnter instruction that succeeds
A successful VMEnter is essentially a fancy indirect branch that
pulls the target RIP from the VMCS. Skipping the instruction is
unnecessary (RIP will get overwritten by the VMExit handler) and
is problematic because it can incorrectly suppress a #DB due to
EFLAGS.TF when a VMFail is detected by hardware (happens after we
skip the instruction).
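
To see why the choice of skip helper matters, here is a minimal sketch of a TF-aware instruction skip in the spirit of kvm_skip_emulated_instruction(). The function name below is made up for illustration, the body is not the verbatim arch/x86/kvm/x86.c implementation, and single-step delivery is reduced to a bare kvm_queue_exception() (the real helper also has to respect userspace single-stepping via guest-debug):

```c
/*
 * Minimal sketch (not the verbatim kernel code) of a TF-aware skip:
 * sample the *old* RFLAGS before advancing RIP, then queue the
 * single-step #DB the guest expects for the skipped VMLAUNCH/VMRESUME.
 */
static int skip_with_singlestep_sketch(struct kvm_vcpu *vcpu)
{
	unsigned long rflags = kvm_x86_ops->get_rflags(vcpu);

	/* Advance RIP past the instruction being emulated. */
	kvm_x86_ops->skip_emulated_instruction(vcpu);

	/*
	 * TF applies to the instruction that was just skipped.  Skipping
	 * too early, before hardware can signal VMFail, is what loses
	 * this trap in the pre-patch code.
	 */
	if (unlikely(rflags & X86_EFLAGS_TF))
		kvm_queue_exception(vcpu, DB_VECTOR);	/* simplified */

	return 1;	/* keep running the guest */
}
```

The point of the patch is that a successful VMEnter needs no skip at all (RIP comes from vmcs12), so a TF-aware skip only has to run on the VMFail path.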
Now that nested_vmx_run() is not prematurely skipping the instruction,
use the full kvm_skip_emulated_instruction() in the VMFail path
of nested_vmx_vmexit(). We also need to explicitly update the
GUEST_INTERRUPTIBILITY_INFO when loading vmcs12 host state.
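
The GUEST_INTERRUPTIBILITY_INFO update lands in load_vmcs12_host_state() as a vmx_set_interrupt_shadow(vcpu, 0) call (see the diff below). Roughly, a zero mask clears any lingering STI/MOV-SS interrupt shadow; the sketch below only illustrates that effect, assumes vmx.c's vmcs_read32()/vmcs_write32() helpers, and uses a made-up function name:

```c
/*
 * Illustrative sketch only (not the actual vmx_set_interrupt_shadow()
 * body): with a zero mask, any STI or MOV-SS blocking left over from
 * the aborted VMEnter is dropped from GUEST_INTERRUPTIBILITY_INFO
 * before L1's host state is restored.
 */
static void clear_interrupt_shadow_sketch(struct kvm_vcpu *vcpu)
{
	u32 info = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);

	info &= ~(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS);
	vmcs_write32(GUEST_INTERRUPTIBILITY_INFO, info);
}
```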
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86/kvm/vmx.c')
-rw-r--r-- | arch/x86/kvm/vmx.c | 21 |
1 file changed, 5 insertions(+), 16 deletions(-)
```diff
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 57379c88fcbd..cdc4367a554e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -12856,15 +12856,6 @@ static int nested_vmx_run(struct kvm_vcpu *vcpu, bool launch)
 	}
 
 	/*
-	 * After this point, the trap flag no longer triggers a singlestep trap
-	 * on the vm entry instructions; don't call kvm_skip_emulated_instruction.
-	 * This is not 100% correct; for performance reasons, we delegate most
-	 * of the checks on host state to the processor.  If those fail,
-	 * the singlestep trap is missed.
-	 */
-	skip_emulated_instruction(vcpu);
-
-	/*
 	 * We're finally done with prerequisite checking, and can start with
 	 * the nested entry.
 	 */
@@ -13243,6 +13234,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
 	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
 	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
+	vmx_set_interrupt_shadow(vcpu, 0);
+
 	/*
 	 * Note that calling vmx_set_cr0 is important, even if cr0 hasn't
 	 * actually changed, because vmx_set_cr0 refers to efer set above.
@@ -13636,10 +13629,12 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		 * in L1 which thinks it just finished a VMLAUNCH or
 		 * VMRESUME instruction, so we need to set the failure
 		 * flag and the VM-instruction error field of the VMCS
-		 * accordingly.
+		 * accordingly, and skip the emulated instruction.
 		 */
 		nested_vmx_failValid(vcpu, VMXERR_ENTRY_INVALID_CONTROL_FIELD);
 
+		kvm_skip_emulated_instruction(vcpu);
+
 		/*
 		 * Restore L1's host state to KVM's software model.  We're here
 		 * because a consistency check was caught by hardware, which
@@ -13648,12 +13643,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
 		 */
 		nested_vmx_restore_host_state(vcpu);
 
-		/*
-		 * The emulated instruction was already skipped in
-		 * nested_vmx_run, but the updated RIP was never
-		 * written back to the vmcs01.
-		 */
-		skip_emulated_instruction(vcpu);
 		vmx->fail = 0;
 	}
 
```