1. 13 Dec 2013 (2 commits)
  2. 12 Dec 2013 (2 commits)
  3. 20 Nov 2013 (1 commit)
  4. 14 Nov 2013 (1 commit)
  5. 07 Nov 2013 (1 commit)
  6. 06 Nov 2013 (1 commit)
  7. 05 Nov 2013 (3 commits)
  8. 03 Nov 2013 (1 commit)
      KVM: x86: fix emulation of "movzbl %bpl, %eax" · daf72722
      Authored by Paolo Bonzini
      When I was looking at RHEL5.9's failure to start with
      unrestricted_guest=0/emulate_invalid_guest_state=1, I got it working with a
      slightly older tree than kvm.git.  I now debugged the remaining failure,
      which was introduced by commit 660696d1 (KVM: X86 emulator: fix
      source operand decoding for 8bit mov[zs]x instructions, 2013-04-24).
      That commit introduced a mis-emulation similar to the one fixed in commit 8acb4207 (KVM:
      fix sil/dil/bpl/spl in the mod/rm fields, 2013-05-30).  The incorrect
      decoding occurs in 8-bit movzx/movsx instructions whose 8-bit operand
      is sil/dil/bpl/spl.
      
      Needless to say, "movzbl %bpl, %eax" does occur in RHEL5.9's decompression
      prolog, just a handful of instructions before finally giving control to
      the decompressed vmlinux and getting out of the invalid guest state.
      
      Because OpMem8 bypasses decode_modrm, the same handling of the REX prefix
      must be applied to OpMem8.
      Reported-by: Michele Baldessari <michele@redhat.com>
      Cc: stable@vger.kernel.org
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
  9. 01 Nov 2013 (1 commit)
  10. 31 Oct 2013 (11 commits)
  11. 28 Oct 2013 (3 commits)
  12. 17 Oct 2013 (1 commit)
  13. 15 Oct 2013 (1 commit)
  14. 11 Oct 2013 (1 commit)
  15. 10 Oct 2013 (1 commit)
      KVM: nVMX: fix shadow on EPT · d0d538b9
      Authored by Gleb Natapov
      72f85795 broke shadow on EPT. This patch reverts it and fixes PAE
      on nEPT (which the reverted commit had fixed) in a different way.
      
      Shadow on EPT is now broken because while L1 builds a shadow page table
      for L2 (which is PAE while L2 is in real mode) it never loads L2's
      GUEST_PDPTR[0-3].  They do not need to be loaded because, without nested
      virtualization, hardware does this during guest entry if EPT is disabled;
      but in our case L0 emulates L2's vmentry while EPT is enabled, so we
      cannot rely on vmcs12->guest_pdptr[0-3] to contain up-to-date values
      and need to re-read the PDPTEs from L2 memory. This is what kvm_set_cr3()
      does, but by clearing the cache bits during L2 vmentry we drop the
      values that kvm_set_cr3() read from memory.
      
      So why does the same code not work for PAE on nEPT? kvm_set_cr3()
      reads the PDPTEs into vcpu->arch.walk_mmu->pdptrs[]. walk_mmu points to
      vcpu->arch.nested_mmu while a nested guest is running, but ept_load_pdptrs()
      uses vcpu->arch.mmu, which contains incorrect values. Fix that by using
      walk_mmu in ept_(load|save)_pdptrs.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  16. 03 Oct 2013 (7 commits)
  17. 30 Sep 2013 (2 commits)
      KVM: Convert kvm_lock back to non-raw spinlock · 2f303b74
      Authored by Paolo Bonzini
      In commit e935b837 ("KVM: Convert kvm_lock to raw_spinlock"),
      the kvm_lock was made a raw lock.  However, the kvm mmu_shrink()
      function tries to grab the (non-raw) mmu_lock within the scope of
      the raw locked kvm_lock being held.  This leads to the following:
      
      BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
      in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
      Preemption disabled at:[<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]
      
      Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
      Call Trace:
       [<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
       [<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
       [<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
       [<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
       [<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
       [<ffffffff8111824a>] balance_pgdat+0x54a/0x730
       [<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
       [<ffffffff811185bf>] kswapd+0x18f/0x490
       [<ffffffff81070961>] ? get_parent_ip+0x11/0x50
       [<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
       [<ffffffff81118430>] ? balance_pgdat+0x730/0x730
       [<ffffffff81060d2b>] kthread+0xdb/0xe0
       [<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
       [<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
       [<ffffffff81060c50>] ? __init_kthread_worker+0x
      
      After the previous patch, kvm_lock need not be a raw spinlock anymore,
      so change it back.
      Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: kvm@vger.kernel.org
      Cc: gleb@redhat.com
      Cc: jan.kiszka@siemens.com
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      KVM: nVMX: Do not generate #DF if #PF happens during exception delivery into L2 · feaf0c7d
      Authored by Gleb Natapov
      If a #PF happens during delivery of an exception into L2 and L1 also
      does not have the page mapped in its shadow page table, then L0 needs
      to generate a vmexit to L1 with the original event in
      IDT_VECTORING_INFO, but the current code combines both exceptions and
      generates #DF instead. Fix that by providing an nVMX-specific function
      to handle page faults during the page table walk that handles this
      case correctly.
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>