1. 09 Jul, 2012 (20 commits)
  2. 04 Jul, 2012 (1 commit)
  3. 25 Jun, 2012 (5 commits)
  4. 19 Jun, 2012 (1 commit)
  5. 14 Jun, 2012 (1 commit)
  6. 12 Jun, 2012 (1 commit)
  7. 06 Jun, 2012 (2 commits)
    • KVM: disable uninitialized var warning · 79f702a6
      Committed by Michael S. Tsirkin
      I see this in 3.5-rc1:
      
      arch/x86/kvm/mmu.c: In function ‘kvm_test_age_rmapp’:
      arch/x86/kvm/mmu.c:1271: warning: ‘iter.desc’ may be used uninitialized in this function
      
      The line in question was introduced by commit 1e3f42f0:
      
       static int kvm_test_age_rmapp(struct kvm *kvm, unsigned long *rmapp,
                                    unsigned long data)
       {
      -       u64 *spte;
      +       u64 *sptep;
      +       struct rmap_iterator iter;   <- line 1271
              int young = 0;
      
              /*
      
      The reason, I think, is that the compiler assumes that
      the rmap value could be 0, so
      
       static u64 *rmap_get_first(unsigned long rmap,
                                  struct rmap_iterator *iter)
      {
              if (!rmap)
                      return NULL;
      
              if (!(rmap & 1)) {
                      iter->desc = NULL;
                      return (u64 *)rmap;
              }
      
              iter->desc = (struct pte_list_desc *)(rmap & ~1ul);
              iter->pos = 0;
              return iter->desc->sptes[iter->pos];
      }
      
      will not initialize iter.desc, but the compiler isn't
      smart enough to see that
      
              for (sptep = rmap_get_first(*rmapp, &iter); sptep;
                   sptep = rmap_get_next(&iter)) {
      
      will immediately exit in this case.
      I checked by adding
              if (!*rmapp)
                      goto out;
      at the top, which is clearly equivalent but disables the warning.
      
      This patch uses uninitialized_var to disable the warning without
      increasing code size; a sketch of the trick follows this entry.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      79f702a6
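      As a rough illustration of the trick: in kernels of that era,
      uninitialized_var() (linux/compiler-gcc.h) expanded to a
      self-initialization, which gcc accepts as "initialized" but which
      generates no instructions. The sketch below is self-contained and
      hypothetical; the struct fields and the demo function are stand-ins,
      not the real KVM code.

       #include <stdint.h>
       typedef uint64_t u64;

       /* Simplified stand-in for the kernel's rmap_iterator. */
       struct rmap_iterator {
               void *desc;
               int pos;
       };

       /* As in linux/compiler-gcc.h of that era: self-initialization
        * silences "may be used uninitialized" without emitting code. */
       #define uninitialized_var(x) x = x

       u64 *rmap_get_first(unsigned long rmap, struct rmap_iterator *iter);

       static u64 *demo(unsigned long *rmapp)
       {
               /* Was: struct rmap_iterator iter;  -- warned under gcc. */
               struct rmap_iterator uninitialized_var(iter);

               return rmap_get_first(*rmapp, &iter);
       }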
    • KVM: Cleanup the kvm_print functions and introduce pr_XX wrappers · a737f256
      Committed by Christoffer Dall
      Introduces a handful of print helpers, which are essentially wrappers
      around the standard printk functions with a KVM: prefix (a sketch of
      such wrappers follows this entry).
      
      Functions introduced or modified are:
       - kvm_err(fmt, ...)
       - kvm_info(fmt, ...)
       - kvm_debug(fmt, ...)
       - kvm_pr_unimpl(fmt, ...)
       - pr_unimpl(vcpu, fmt, ...) -> vcpu_unimpl(vcpu, fmt, ...)
      Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      a737f256
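      The definitions below are a plausible sketch of such wrappers,
      modeled on include/linux/kvm_host.h of that era; the exact prefix
      and format strings are assumptions, not quoted from the patch.

       #define kvm_err(fmt, ...) \
               pr_err("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
       #define kvm_info(fmt, ...) \
               pr_info("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
       #define kvm_debug(fmt, ...) \
               pr_debug("kvm [%i]: " fmt, task_pid_nr(current), ## __VA_ARGS__)
       #define kvm_pr_unimpl(fmt, ...) \
               pr_err_ratelimited("kvm [%i]: " fmt, \
                                  task_pid_nr(current), ## __VA_ARGS__)

       /* The guest did something we don't support. */
       #define vcpu_unimpl(vcpu, fmt, ...) \
               kvm_pr_unimpl("vcpu%i " fmt, (vcpu)->vcpu_id, ## __VA_ARGS__)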
  8. 05 Jun, 2012 (6 commits)
  9. 28 May, 2012 (1 commit)
  10. 17 May, 2012 (2 commits)
    • KVM: Fix mmu_reload() clash with nested vmx event injection · d8368af8
      Committed by Avi Kivity
      Currently the inject_pending_event() call during guest entry happens
      after kvm_mmu_reload(). This is for historical reasons: we used to
      call inject_pending_event() in atomic context, while kvm_mmu_reload()
      needs task context.
      
      A problem is that nested vmx can cause the mmu context to be reset if
      event injection is intercepted and causes a #VMEXIT instead (the
      #VMEXIT resets CR0/CR3/CR4). If this happens, we end up with an
      invalid root_hpa, and since kvm_mmu_reload() has already run, no one
      will fix it and we end up entering the guest this way.
      
      Fix by reordering event injection to be before kvm_mmu_reload(). Use
      ->cancel_injection() to undo if kvm_mmu_reload() fails; a sketch of
      the reordered flow follows this entry.
      
      https://bugzilla.kernel.org/show_bug.cgi?id=42980
      Reported-by: Luke-Jr <luke-jr+linuxbugs@utopios.org>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      d8368af8
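      A condensed sketch of the reordered entry path, loosely modeled on
      vcpu_enter_guest() in arch/x86/kvm/x86.c; error handling and
      unrelated steps are omitted, so treat it as illustrative rather than
      the actual patch.

       static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
       {
               int r;

               /* Inject first: if nested vmx intercepts the event, the
                * resulting #VMEXIT resets CR0/CR3/CR4 and invalidates
                * root_hpa ... */
               inject_pending_event(vcpu);

               /* ... so the MMU reload runs afterwards and repairs it,
                * guaranteeing a valid root_hpa at guest entry. */
               r = kvm_mmu_reload(vcpu);
               if (unlikely(r)) {
                       /* Reload failed: take back the queued event so it
                        * can be re-injected on the next entry attempt. */
                       kvm_x86_ops->cancel_injection(vcpu);
                       goto out;
               }

               /* ... enter the guest ... */
       out:
               return r;
       }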
    • KVM: MMU: Don't use RCU for lockless shadow walking · c142786c
      Committed by Avi Kivity
      Using RCU for lockless shadow walking can increase the amount of memory
      in use by the system, since RCU grace periods are unpredictable.  We also
      have an unconditional write to a shared variable (reader_counter), which
      isn't good for scaling.
      
      Replace that with a scheme similar to x86's get_user_pages_fast():
      disable interrupts during the lockless shadow walk, which forces the
      freer (kvm_mmu_commit_zap_page()) to wait, since its TLB flush IPI
      cannot be serviced until the walking processor re-enables interrupts.
      
      We also add a new vcpu->mode, READING_SHADOW_PAGE_TABLES, to prevent
      kvm_flush_remote_tlbs() from skipping the IPI (see the sketch after
      this entry).
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      c142786c
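      A sketch of the begin/end helpers this scheme implies, closely
      modeled on walk_shadow_page_lockless_begin()/_end() in
      arch/x86/kvm/mmu.c of that era; barrier placement follows the
      description above, but the details are illustrative.

       static void walk_shadow_page_lockless_begin(struct kvm_vcpu *vcpu)
       {
               /* Prevent page-table teardown: with interrupts off, the
                * freer's TLB flush IPI cannot complete on this CPU. */
               local_irq_disable();
               vcpu->mode = READING_SHADOW_PAGE_TABLES;
               /* Keep the following spte reads from being reordered
                * ahead of the write to vcpu->mode, so the flusher
                * cannot skip the IPI for this vcpu. */
               smp_mb();
       }

       static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
       {
               /* Keep the spte reads from being reordered past the mode
                * write; otherwise the freer could see OUTSIDE_GUEST_MODE
                * and free the shadow pages while we still read them. */
               smp_mb();
               vcpu->mode = OUTSIDE_GUEST_MODE;
               local_irq_enable();
       }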