1. 09 Jan, 2016 1 commit
  2. 30 Nov, 2015 2 commits
  3. 26 Nov, 2015 1 commit
  4. 23 Oct, 2015 1 commit
  5. 01 Oct, 2015 3 commits
  6. 25 Sep, 2015 1 commit
  7. 16 Sep, 2015 1 commit
    •
      KVM: add halt_attempted_poll to VCPU stats · 62bea5bf
      Paolo Bonzini committed
      This new statistic can help diagnose VCPUs that, for whatever reason,
      trigger bad behavior of the halt_poll_ns autotuning.
      
      For example, say halt_poll_ns = 480000, and wakeups are spaced at
      exactly 479us, 481us, 479us, 481us. Then KVM always fails polling
      and wastes 10+20+40+80+160+320+480 = 1110 microseconds out of every
      479+481+479+481+479+481+479 = 3359 microseconds. The VCPU is then
      consuming about 30% more CPU than it would without polling. This
      shows up as an abnormally high number of attempted polls compared
      to successful ones (where the counter sits is sketched after this
      entry).
      
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      62bea5bf
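      A minimal sketch of where such a counter sits in the halt-polling
      path; only the two stat fields are taken from the entry above, while
      the function shape and loop are illustrative:
      
      void kvm_vcpu_block(struct kvm_vcpu *vcpu)
      {
              ktime_t start = ktime_get();
      
              if (vcpu->halt_poll_ns) {
                      ktime_t stop = ktime_add_ns(start, vcpu->halt_poll_ns);
      
                      ++vcpu->stat.halt_attempted_poll;  /* the new statistic */
                      do {
                              /* did a wakeup condition arrive within the window? */
                              if (kvm_vcpu_check_block(vcpu) < 0) {
                                      ++vcpu->stat.halt_successful_poll;
                                      return;
                              }
                              cpu_relax();
                      } while (ktime_before(ktime_get(), stop));
              }
      
              /* polling failed: fall through and actually halt the vCPU */
      }
      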
  8. 15 Sep, 2015 1 commit
  9. 14 Sep, 2015 1 commit
  10. 11 Sep, 2015 1 commit
    •
      mmu-notifier: add clear_young callback · 1d7715c6
      Vladimir Davydov committed
      In the scope of the idle memory tracking feature, introduced by the
      following patch, we need to clear the referenced/accessed bit not only
      in primary, but also in secondary ptes.  The latter is required in order
      to estimate the working set size (wss) of KVM VMs.  At the same time we
      want to avoid flushing the TLB, because it is quite expensive and it
      won't really affect the final result.
      
      Currently, there is no function for clearing the pte young bit that
      meets these requirements, so this patch introduces one.  To achieve that
      we have to add a new mmu-notifier callback, clear_young, since there is
      no method for testing-and-clearing a secondary pte without flushing the
      TLB.  The new callback is not mandatory and is currently only
      implemented by KVM (see the sketch after this entry).
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1d7715c6
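      A sketch of the new optional callback next to the flushing variant it
      complements; the signature mirrors clear_flush_young, and the
      surrounding struct is abbreviated:
      
      struct mmu_notifier_ops {
              /* ... other callbacks elided ... */
      
              /* existing: test-and-clear the accessed bit, flushing the TLB */
              int (*clear_flush_young)(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end);
      
              /*
               * New and optional: clear the accessed bit in secondary ptes
               * for the range WITHOUT a TLB flush, returning nonzero if any
               * pte was young.  Currently only KVM implements it.
               */
              int (*clear_young)(struct mmu_notifier *mn,
                                 struct mm_struct *mm,
                                 unsigned long start,
                                 unsigned long end);
      };
      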
  11. 06 Sep, 2015 3 commits
    •
      KVM: trace kvm_halt_poll_ns grow/shrink · 2cbd7824
      Wanpeng Li committed
      Tracepoint for dynamic halt_poll_ns, fired on every potential change
      (a sketch of such a definition follows this entry).
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2cbd7824
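      A hedged sketch of what such a tracepoint definition looks like; the
      event name matches the title, but the fields and format string here
      are assumptions:
      
      TRACE_EVENT(kvm_halt_poll_ns,
              TP_PROTO(bool grow, unsigned int vcpu_id, int new, int old),
              TP_ARGS(grow, vcpu_id, new, old),
      
              TP_STRUCT__entry(
                      __field(bool, grow)
                      __field(unsigned int, vcpu_id)
                      __field(int, new)
                      __field(int, old)
              ),
      
              TP_fast_assign(
                      __entry->grow    = grow;
                      __entry->vcpu_id = vcpu_id;
                      __entry->new     = new;
                      __entry->old     = old;
              ),
      
              TP_printk("vcpu %u: halt_poll_ns %d (%s %d)",
                        __entry->vcpu_id, __entry->new,
                        __entry->grow ? "grow" : "shrink", __entry->old)
      );
      
      /* one wrapper per direction, matching the grow/shrink naming */
      #define trace_kvm_halt_poll_ns_grow(vcpu_id, new, old) \
              trace_kvm_halt_poll_ns(true, vcpu_id, new, old)
      #define trace_kvm_halt_poll_ns_shrink(vcpu_id, new, old) \
              trace_kvm_halt_poll_ns(false, vcpu_id, new, old)
      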
    •
      KVM: dynamic halt-polling · aca6ff29
      Wanpeng Li committed
      There is a downside to always-poll: polling still happens for idle
      vCPUs, which wastes CPU. This patchset adds the ability to adjust
      halt_poll_ns dynamically, growing it when a short halt is detected
      and shrinking it when a long halt is detected.
      
      There are two new kernel parameters for changing the halt_poll_ns:
      halt_poll_ns_grow and halt_poll_ns_shrink.
      
                              no-poll      always-poll    dynamic-poll
      -----------------------------------------------------------------------
      Idle (nohz) vCPU %c0     0.15%        0.3%            0.2%
      Idle (250HZ) vCPU %c0    1.1%         4.6%~14%        1.2%
      TCP_RR latency           34us         27us            26.7us
      
      "Idle (X) vCPU %c0" is the percent of time the physical cpu spent in
      c0 over 60 seconds (each vCPU is pinned to a pCPU). (nohz) means the
      guest was tickless. (250HZ) means the guest was ticking at 250HZ.
      
      The big win is with ticking operating systems. Running the Linux guest
      with nohz=off (and HZ=250), we save 3.4%~12.8% CPUs/second and get close
      to no-polling overhead levels by using the dynamic-poll. The savings
      should be even higher for higher-frequency ticks. The grow/shrink
      helpers are sketched after this entry.
      Suggested-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      [Simplify the patch. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      aca6ff29
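      A sketch of the grow/shrink helpers described above; the parameter
      names come from the entry, while the 10us base and the
      multiply/divide behavior are assumptions consistent with the
      halt_attempted_poll example earlier in this log:
      
      static unsigned int halt_poll_ns_grow = 2;
      module_param(halt_poll_ns_grow, uint, 0644);
      
      static unsigned int halt_poll_ns_shrink;        /* 0 = reset to 0 */
      module_param(halt_poll_ns_shrink, uint, 0644);
      
      static void grow_halt_poll_ns(struct kvm_vcpu *vcpu)
      {
              int val = vcpu->halt_poll_ns;
      
              /* start from a 10us base, then multiply on every short halt */
              if (val == 0 && halt_poll_ns_grow)
                      val = 10000;
              else
                      val *= halt_poll_ns_grow;
      
              vcpu->halt_poll_ns = val;
      }
      
      static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
      {
              int val = vcpu->halt_poll_ns;
      
              if (halt_poll_ns_shrink == 0)
                      val = 0;                /* long halt: stop polling */
              else
                      val /= halt_poll_ns_shrink;
      
              vcpu->halt_poll_ns = val;
      }
      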
    •
      KVM: make halt_poll_ns per-vCPU · 19020f8a
      Wanpeng Li committed
      Change halt_poll_ns into a per-vCPU variable, seeded from the module
      parameter, to allow greater flexibility (a minimal sketch follows
      this entry).
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      19020f8a
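      A minimal sketch of the change's shape, assuming the per-vCPU value
      is seeded at vCPU initialization (the exact hook is not stated in
      the entry):
      
      /* the module parameter becomes only a seed, not a global knob */
      unsigned int halt_poll_ns;
      module_param(halt_poll_ns, uint, 0644);
      
      int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned int id)
      {
              /* ... other initialization elided ... */
              vcpu->halt_poll_ns = halt_poll_ns;  /* per-vCPU copy from here on */
              return 0;
      }
      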
  12. 30 Jul, 2015 1 commit
  13. 29 Jul, 2015 1 commit
  14. 04 Jul, 2015 1 commit
  15. 05 Jun, 2015 2 commits
    •
      KVM: implement multiple address spaces · f481b069
      Paolo Bonzini committed
      Only two ioctls have to be modified; the address space id is
      placed in the upper 16 bits of their slot id argument (the
      decoding is sketched after this entry).
      
      As of this patch, no architecture defines more than one
      address space; x86 will be the first.
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f481b069
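      A sketch of the decoding on the KVM_SET_USER_MEMORY_REGION path;
      KVM_ADDRESS_SPACE_NUM stands for the per-architecture address space
      count this patch implies, the rest follows the 16-bit split
      described above:
      
      int __kvm_set_memory_region(struct kvm *kvm,
                                  const struct kvm_userspace_memory_region *mem)
      {
              int as_id = mem->slot >> 16;    /* upper 16 bits: address space id */
              u16 id = (u16)mem->slot;        /* lower 16 bits: the memslot id */
      
              if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
                      return -EINVAL;
      
              /* ... look up the memslots array for this address space ... */
              return 0;
      }
      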
    •
      KVM: add vcpu-specific functions to read/write/translate GFNs · 8e73485c
      Paolo Bonzini committed
      We need to hide SMRAM from guests not running in SMM.  Therefore, all
      uses of kvm_read_guest* and kvm_write_guest* must be changed to use
      different address spaces, depending on whether the VCPU is in system
      management mode.  We need to introduce a new family of functions for
      this purpose.
      
      For now, the VCPU-based functions have the same behavior as the
      existing per-VM ones; they just accept a different type for the
      first argument.  Later, however, they will be changed to use one of many
      "struct kvm_memslots" stored in struct kvm, through an architecture hook.
      VM-based functions will unconditionally use the first memslots pointer.
      
      Whenever possible, this patch introduces slot-based functions with an
      __ prefix, with two wrappers for generic and vcpu-based actions.
      The exceptions are kvm_read_guest and kvm_write_guest, which are copied
      into the new functions kvm_vcpu_read_guest and kvm_vcpu_write_guest.
      The layering is sketched after this entry.
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e73485c
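      A sketch of the three-layer pattern the entry describes: one
      slot-based worker with the __ prefix plus a VM-based and a
      vCPU-based wrapper (bodies simplified):
      
      /* slot-based worker: everything below here already knows the memslot */
      static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
                                       void *data, int offset, int len)
      {
              unsigned long addr = gfn_to_hva_memslot(slot, gfn);
      
              if (kvm_is_error_hva(addr))
                      return -EFAULT;
              if (copy_from_user(data, (void __user *)addr + offset, len))
                      return -EFAULT;
              return 0;
      }
      
      /* VM-based wrapper: unconditionally uses the VM's first memslots */
      int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data,
                              int offset, int len)
      {
              return __kvm_read_guest_page(gfn_to_memslot(kvm, gfn), gfn,
                                           data, offset, len);
      }
      
      /* vCPU-based wrapper: can later pick an SMM-specific address space */
      int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data,
                                   int offset, int len)
      {
              return __kvm_read_guest_page(kvm_vcpu_gfn_to_memslot(vcpu, gfn),
                                           gfn, data, offset, len);
      }
      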
  16. 28 May, 2015 4 commits
  17. 26 May, 2015 4 commits
  18. 20 May, 2015 1 commit
    •
      KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async · 3520469d
      Paolo Bonzini committed
      gfn_to_pfn_async is used in just one place, and because of x86-specific
      treatment that place will need to look at the memory slot.  Hence inline
      it into try_async_pf and export __gfn_to_pfn_memslot.
      
      The patch also switches the subsequent call to gfn_to_pfn_prot to use
      __gfn_to_pfn_memslot.  This is a small optimization.  Finally, remove
      the now-unused async argument of __gfn_to_pfn.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3520469d
  19. 08 May, 2015 2 commits
  20. 21 Apr, 2015 1 commit
    •
      KVM: PPC: Book3S HV: Create debugfs file for each guest's HPT · e23a808b
      Paul Mackerras committed
      This creates a debugfs directory for each HV guest (assuming debugfs
      is enabled in the kernel config), and within that directory, a file
      by which the contents of the guest's HPT (hashed page table) can be
      read.  The directory is named vmnnnn, where nnnn is the PID of the
      process that created the guest.  The file is named "htab".  This is
      intended to help in debugging problems in the host's management
      of guest memory.
      
      The contents of the file consist of a series of lines like this:
      
        3f48 4000d032bf003505 0000000bd7ff1196 00000003b5c71196
      
      The first field is the index of the entry in the HPT, and the second
      and third are the two doublewords of the HPT entry, so the third
      field contains the real page number mapped by the entry if the
      entry's valid bit is set.  The fourth field is the guest's view of
      the second doubleword of the entry, so it contains the guest
      physical address.  (The format of the second through fourth fields
      is described in the Power ISA and also in
      arch/powerpc/include/asm/mmu-hash64.h.)  The debugfs wiring is
      sketched after this entry.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      e23a808b
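      A hedged sketch of the debugfs wiring; debugfs_create_dir/file and
      kvm_debugfs_dir are standard kernel API, while the function name,
      the arch field, and debugfs_htab_fops are placeholders for the
      commit's actual ops:
      
      void kvmppc_mmu_debugfs_init(struct kvm *kvm)
      {
              char name[16];
      
              /* one directory per guest, named after the creating process */
              snprintf(name, sizeof(name), "vm%d", current->pid);
              kvm->arch.debugfs_dir = debugfs_create_dir(name, kvm_debugfs_dir);
      
              /* a read-only "htab" file dumps the guest's hashed page table */
              debugfs_create_file("htab", 0400, kvm->arch.debugfs_dir,
                                  kvm, &debugfs_htab_fops);
      }
      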
  21. 10 Apr, 2015 1 commit
  22. 08 Apr, 2015 1 commit
  23. 01 Apr, 2015 1 commit
  24. 27 Mar, 2015 2 commits
  25. 24 Mar, 2015 1 commit
    •
      kvm: avoid page allocation failure in kvm_set_memory_region() · 74496134
      Igor Mammedov committed
      A KVM guest can fail to start up with the following trace on the host:
      
      qemu-system-x86: page allocation failure: order:4, mode:0x40d0
      Call Trace:
        dump_stack+0x47/0x67
        warn_alloc_failed+0xee/0x150
        __alloc_pages_direct_compact+0x14a/0x150
        __alloc_pages_nodemask+0x776/0xb80
        alloc_kmem_pages+0x3a/0x110
        kmalloc_order+0x13/0x50
        kmemdup+0x1b/0x40
        __kvm_set_memory_region+0x24a/0x9f0 [kvm]
        kvm_set_ioapic+0x130/0x130 [kvm]
        kvm_set_memory_region+0x21/0x40 [kvm]
        kvm_vm_ioctl+0x43f/0x750 [kvm]
      
      The failure happens when attempting to allocate pages for
      'struct kvm_memslots'; however, the structure doesn't have to
      live in physically contiguous (kmalloc-ed) address space.
      Change the allocation to kvm_kvzalloc() so that it is vmalloc-ed
      when its size is more than a page (the helper's shape is sketched
      after this entry).
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      74496134
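      A sketch of the kvm_kvzalloc() shape the fix relies on: physically
      contiguous memory for small sizes, virtually contiguous beyond a
      page.  The matching free side then has to accept either form, which
      is what kvfree() handles:
      
      void *kvm_kvzalloc(unsigned long size)
      {
              if (size > PAGE_SIZE)
                      return vzalloc(size);   /* virtually contiguous is enough */
              else
                      return kzalloc(size, GFP_KERNEL);
      }
      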
  26. 19 Mar, 2015 1 commit