1. 17 Apr 2013, 1 commit
    • KVM: VMX: Enable acknowledge interrupt on vmexit · a547c6db
      Authored by Yang Zhang
      The "acknowledge interrupt on exit" feature controls processor behavior
      for external interrupt acknowledgement. When this control is set, the
      processor acknowledges the interrupt controller to acquire the
      interrupt vector on VM exit.
      
      After enabling this feature, an interrupt that arrives while the target
      CPU is running in VMX non-root mode is handled by the VMX handler instead
      of the handler in the IDT. Currently, the VMX handler only fakes an
      interrupt stack and jumps to the IDT table to let the real handler handle
      it. Later, we will recognize the interrupt and deliver through the IDT
      table only those interrupts that do not belong to the current vcpu;
      interrupts that do belong to the current vcpu will be handled inside the
      VMX handler. This will reduce KVM's interrupt handling cost.
      
      Also, the interrupt enabling logic changes when this feature is turned
      on. Before this patch, the hypervisor called local_irq_enable() to enable
      interrupts directly. Now, if an external interrupt exists, the IF bit is
      set on the faked interrupt stack frame and interrupts are re-enabled on
      the return from the interrupt handler; if there is no external interrupt,
      local_irq_enable() is still called (a sketch of this flow follows the
      entry).
      
      Refer to Intel SDM Volume 3, Chapter 33.2.
      Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
      Reviewed-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
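      The flow described above can be sketched roughly as follows. This is a
      hedged illustration, not the actual arch/x86/kvm/vmx.c code:
      idt_entry() and call_idt_handler_with_fake_frame() are placeholder
      helpers, and the real handler builds the fake stack frame in inline
      assembly before jumping to the IDT entry.

      ```c
      /*
       * Illustrative sketch of "ack interrupt on exit" handling; the helpers
       * marked as placeholders do not exist in the kernel.
       */
      static void handle_external_intr_on_exit(struct kvm_vcpu *vcpu)
      {
              u32 intr_info = vmcs_read32(VM_EXIT_INTR_INFO);

              if (is_external_interrupt(intr_info)) {
                      unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
                      gate_desc *desc = idt_entry(vector); /* placeholder lookup */

                      /*
                       * Fake an interrupt stack frame with the IF bit set in
                       * the saved flags, then call the real handler through the
                       * IDT; interrupts are re-enabled when it returns.
                       */
                      call_idt_handler_with_fake_frame(desc); /* placeholder */
              } else {
                      /* No external interrupt: enable interrupts directly. */
                      local_irq_enable();
              }
      }
      ```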
  2. 16 Apr 2013, 4 commits
  3. 14 Apr 2013, 8 commits
  4. 11 Apr 2013, 1 commit
    • KVM: x86 emulator: Fix segment loading in VM86 · f8da94e9
      Authored by Kevin Wolf
      This fixes a regression introduced in commit 03ebebeb ("KVM: x86
      emulator: Leave segment limit and attributs alone in real mode").
      
      The mentioned commit changed the segment descriptors for both real mode
      and VM86 to only update the segment base instead of creating a
      completely new descriptor with limit 0xffff so that unreal mode keeps
      working across a segment register reload.
      
      This leads to a segment descriptor that is invalid in the eyes of VMX.
      That seems to be okay for real mode, because KVM fixes the descriptor up
      before the next VM entry or emulates the state, but it does not do this
      when the guest is in VM86, so we end up with:
      
        KVM: entry failed, hardware error 0x80000021
      
      Fix this by effectively reverting commit 03ebebeb for VM86 and leaving it
      in place only for real mode, which is where it is really needed (a sketch
      of the resulting logic follows this entry).
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
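      A hedged sketch of the segment-load branch this fix produces,
      reconstructed from the description above; the descriptor field values
      follow the usual x86 conventions but should be treated as illustrative
      rather than a verbatim copy of arch/x86/kvm/emulate.c.

      ```c
      /*
       * Real mode keeps the existing descriptor (preserving unreal-mode
       * limits); VM86 gets a freshly built 16-bit data descriptor with
       * limit 0xffff, which is valid in the eyes of VMX.
       */
      if (ctxt->mode == X86EMUL_MODE_REAL) {
              /* Keep limit and attributes so unreal mode survives the reload. */
              ctxt->ops->get_segment(ctxt, &dummy, &seg_desc, NULL, seg);
              set_desc_base(&seg_desc, selector << 4);
              goto load;
      } else if (ctxt->mode == X86EMUL_MODE_VM86) {
              /* VM86 needs a complete, valid descriptor. */
              memset(&seg_desc, 0, sizeof(seg_desc));
              seg_desc.type = 3;      /* read/write data segment */
              seg_desc.p = 1;         /* present */
              seg_desc.s = 1;         /* code/data, not a system segment */
              seg_desc.dpl = 3;       /* VM86 code runs at privilege level 3 */
              set_desc_base(&seg_desc, selector << 4);
              set_desc_limit(&seg_desc, 0xffff);
              goto load;
      }
      ```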
  5. 08 Apr 2013, 5 commits
  6. 07 Apr 2013, 3 commits
  7. 02 Apr 2013, 1 commit
    • pmu: prepare for migration support · afd80d85
      Authored by Paolo Bonzini
      In order to migrate the PMU state correctly, we need to restore the
      values of MSR_CORE_PERF_GLOBAL_STATUS (a read-only register) and
      MSR_CORE_PERF_GLOBAL_OVF_CTRL (which has side effects when written).
      We also need to write the full 40-bit value of the performance counter,
      which would only be possible with a v3 architectural PMU's full-width
      counter MSRs.
      
      To distinguish host-initiated writes from the guest's, pass the full
      struct msr_data to kvm_pmu_set_msr (see the sketch after this entry).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
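      For context, a minimal sketch of the interface this change implies. The
      msr_data layout shown is the commonly known one, but verify it against
      the tree before relying on it.

      ```c
      /*
       * kvm_pmu_set_msr() now receives the full msr_data, so it can check
       * host_initiated and permit migration-time restores of read-only or
       * side-effecting MSRs that a guest write would not be allowed to touch.
       */
      struct msr_data {
              bool host_initiated; /* write came from userspace (e.g. migration) */
              u32 index;           /* MSR number, e.g. MSR_CORE_PERF_GLOBAL_STATUS */
              u64 data;            /* value to write */
      };

      int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
      ```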
  8. 22 Mar 2013, 2 commits
    • KVM: MMU: Rename kvm_mmu_free_some_pages() to make_mmu_pages_available() · 81f4f76b
      Authored by Takuya Yoshikawa
      The current name "kvm_mmu_free_some_pages" should belong to something
      that actually frees some shadow pages, as the name suggests, but what the
      function really does is make a minimum number of shadow pages,
      KVM_MIN_FREE_MMU_PAGES, available: it does nothing when there are already
      enough.

      This patch changes the name to reflect that meaning better; while doing
      the renaming, the code of the wrapper function is inlined into the main
      body, since the whole function will now be inlined into its only caller
      (a sketch follows this entry).
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
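      A hedged sketch of the renamed function's shape, reconstructed from the
      description; zap_oldest_mmu_page() is a placeholder for the actual
      zapping logic.

      ```c
      /*
       * Make KVM_MIN_FREE_MMU_PAGES shadow pages available: a no-op when
       * enough pages are free, otherwise zap pages until the minimum is
       * restored. Illustrative only.
       */
      static void make_mmu_pages_available(struct kvm_vcpu *vcpu)
      {
              LIST_HEAD(invalid_list);

              /* Fast path: nothing to do when enough pages are available. */
              if (likely(kvm_mmu_available_pages(vcpu->kvm) >= KVM_MIN_FREE_MMU_PAGES))
                      return;

              while (kvm_mmu_available_pages(vcpu->kvm) < KVM_MIN_FREE_MMU_PAGES)
                      zap_oldest_mmu_page(vcpu->kvm, &invalid_list); /* placeholder */

              kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
      }
      ```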
    • KVM: MMU: Move kvm_mmu_free_some_pages() into kvm_mmu_alloc_page() · 7ddca7e4
      Authored by Takuya Yoshikawa
      What this function does is ensure that the number of shadow pages does
      not exceed the maximum limit stored in n_max_mmu_pages; for that reason
      it is placed on every code path that can reach kvm_mmu_alloc_page().

      Spreading the function across each such code path made some sense while
      it could be called before taking mmu_lock, but the locking rule was later
      changed so that it no longer can be.

      Taking this background into account, this patch moves it into
      kvm_mmu_alloc_page() and simplifies the code (see the sketch after this
      entry).

      Note: the unlikely hint in kvm_mmu_free_some_pages() guarantees that its
      overhead is almost zero except when we actually need to allocate some
      shadow pages, so we do not need to worry about it being called multiple
      times on one path when kvm_mmu_get_page() is invoked a few times.
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
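      A hedged sketch of the move described in this entry; the argument list of
      kvm_mmu_alloc_page() is trimmed for illustration and does not match the
      real one.

      ```c
      /*
       * Instead of calling kvm_mmu_free_some_pages() on every code path that
       * can reach kvm_mmu_alloc_page(), do the check once inside it.
       */
      static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu
                                                     /* ... */)
      {
              struct kvm_mmu_page *sp;

              /* Formerly each caller's duty; now centralized here. */
              kvm_mmu_free_some_pages(vcpu);

              sp = mmu_memory_cache_alloc(&vcpu->arch.mmu_page_header_cache);
              /* ... initialize sp and link it into the active list ... */
              return sp;
      }
      ```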
  9. 21 Mar 2013, 1 commit
  10. 20 Mar 2013, 2 commits
  11. 19 Mar 2013, 2 commits
  12. 18 Mar 2013, 1 commit
    • perf,x86: fix wrmsr_on_cpu() warning on suspend/resume · 2a6e06b2
      Authored by Linus Torvalds
      Commit 1d9d8639 ("perf,x86: fix kernel crash with PEBS/BTS after
      suspend/resume") fixed a crash when doing PEBS performance profiling
      after resuming, but by using init_debug_store_on_cpu() to restore the
      DS_AREA MSR it also caused a new WARN_ON() to trigger.
      
      init_debug_store_on_cpu() uses wrmsr_on_cpu(), which in turn uses CPU
      cross-calls to do the MSR update. That is not really valid at the early
      resume stage, and the warning is quite reasonable. It all happens to
      _work_, for the simple reason that smp_call_function_single() ends up
      just doing the call directly on the CPU when the CPU number matches, but
      we really should just do the wrmsr() directly instead (a sketch follows
      this entry).

      This duplicates the wrmsr() logic, but hopefully we can just remove the
      wrmsr_on_cpu() version eventually.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
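      A hedged sketch of the direct-write approach described above, assuming
      the restore runs on the CPU whose MSR needs restoring; the function name
      is illustrative, while wrmsrl() and MSR_IA32_DS_AREA are real.

      ```c
      /*
       * During early resume we already execute on the target CPU, so write
       * the MSR directly with wrmsrl() instead of wrmsr_on_cpu(), which
       * would attempt an IPI-based cross-call and trigger the WARN_ON().
       */
      static void restore_ds_area_on_this_cpu(void *ds_area)
      {
              /* Direct write: no cross-call, safe before IRQs are fully up. */
              wrmsrl(MSR_IA32_DS_AREA, (unsigned long)ds_area);
      }
      ```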
  13. 16 Mar 2013, 1 commit
  14. 14 Mar 2013, 4 commits
  15. 13 Mar 2013, 4 commits