1. 11 September 2015, 1 commit
    • mmu-notifier: add clear_young callback · 1d7715c6
      Committed by Vladimir Davydov
      In the scope of the idle memory tracking feature, which is introduced by
      the following patch, we need to clear the referenced/accessed bit not only
      in primary, but also in secondary PTEs.  The latter is required in order
      to estimate the working set size (WSS) of KVM VMs.  At the same time we
      want to avoid flushing the TLB, because it is quite expensive and it won't
      really affect the final result.
      
      Currently, there is no function for clearing the PTE young bit that meets
      our requirements, so this patch introduces one.  To achieve that we have
      to add a new mmu-notifier callback, clear_young, since there is no method
      for testing-and-clearing a secondary PTE without flushing the TLB.  The
      new method is not mandatory and is currently only implemented by KVM.
      (A sketch of the callback wiring follows this entry.)
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Reviewed-by: Andres Lagar-Cavilla <andreslc@google.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1d7715c6
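      A minimal sketch of how a secondary-MMU driver could wire up the new
      callback.  The ops-struct field matches the clear_young interface this
      patch adds; my_test_and_clear_young() is a hypothetical driver helper,
      not part of the patch.

        #include <linux/mmu_notifier.h>

        /* Hypothetical helper: test-and-clear the accessed bit in the
         * driver's secondary PTEs covering [start, end), with no TLB flush. */
        extern int my_test_and_clear_young(struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end);

        static int my_clear_young(struct mmu_notifier *mn, struct mm_struct *mm,
                                  unsigned long start, unsigned long end)
        {
            /* No TLB flush here: a stale accessed bit only makes the idle
             * tracking estimate slightly conservative, which is acceptable. */
            return my_test_and_clear_young(mm, start, end);
        }

        static const struct mmu_notifier_ops my_mmu_notifier_ops = {
            /* .clear_young is optional; as of this patch only KVM sets it. */
            .clear_young = my_clear_young,
        };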
  2. 30 July 2015, 1 commit
  3. 29 July 2015, 1 commit
  4. 04 July 2015, 1 commit
  5. 05 June 2015, 2 commits
    • KVM: implement multiple address spaces · f481b069
      Committed by Paolo Bonzini
      Only two ioctls have to be modified; the address space id is
      placed in the upper 16 bits of their slot id argument.
      
      As of this patch, no architecture defines more than one
      address space; x86 will be the first.  (An illustrative userspace
      call follows this entry.)
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f481b069
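      An illustrative userspace call showing the new encoding.  The VM file
      descriptor, address-space id, slot number, addresses and sizes here are
      all made up for the example; only the "id in the upper 16 bits of the
      slot argument" layout comes from the patch.

        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/kvm.h>

        /* Register a region in address space 1 (e.g. SMRAM on x86), slot 5. */
        static int set_region_in_as1(int vm_fd, void *host_mem)
        {
            struct kvm_userspace_memory_region region;

            memset(&region, 0, sizeof(region));
            region.slot            = (1u << 16) | 5;   /* AS id 1, slot 5 */
            region.guest_phys_addr = 0xfeda0000;       /* illustrative GPA */
            region.memory_size     = 0x10000;
            region.userspace_addr  = (unsigned long)host_mem;

            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
        }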
    • KVM: add vcpu-specific functions to read/write/translate GFNs · 8e73485c
      Committed by Paolo Bonzini
      We need to hide SMRAM from guests not running in SMM.  Therefore, all
      uses of kvm_read_guest* and kvm_write_guest* must be changed to use
      different address spaces, depending on whether the VCPU is in system
      management mode.  We need to introduce a new family of functions for
      this purpose.
      
      For now, the VCPU-based functions behave the same as the existing
      per-VM ones; they just accept a different type for the first argument.
      Later, however, they will be changed to use one of many
      "struct kvm_memslots" stored in struct kvm, through an architecture hook.
      VM-based functions will unconditionally use the first memslots pointer.
      
      Whenever possible, this patch introduces slot-based functions with an
      __ prefix, plus two wrappers for generic and vcpu-based actions.
      The exceptions are kvm_read_guest and kvm_write_guest, which are copied
      into the new functions kvm_vcpu_read_guest and kvm_vcpu_write_guest.
      (A sketch of this wrapper pattern follows this entry.)
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      8e73485c
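      A sketch of the wrapper pattern described above, as it would sit inside
      virt/kvm/kvm_main.c: one slot-based helper with the __ prefix, plus a
      VM-based and a VCPU-based wrapper.  Error handling is simplified, but
      the shape follows the functions introduced by this series.

        /* Slot-based worker: everything interesting happens here. */
        static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
                                         void *data, int offset, int len)
        {
            unsigned long addr = gfn_to_hva_memslot(slot, gfn);

            if (kvm_is_error_hva(addr))
                return -EFAULT;
            if (__copy_from_user(data, (void __user *)addr + offset, len))
                return -EFAULT;
            return 0;
        }

        /* VM-based wrapper: unconditionally uses the first memslots pointer. */
        int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data,
                                int offset, int len)
        {
            return __kvm_read_guest_page(gfn_to_memslot(kvm, gfn),
                                         gfn, data, offset, len);
        }

        /* VCPU-based wrapper: will later pick the memslots that match the
         * VCPU's address space (e.g. SMM vs. non-SMM on x86). */
        int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn,
                                     void *data, int offset, int len)
        {
            return __kvm_read_guest_page(kvm_vcpu_gfn_to_memslot(vcpu, gfn),
                                         gfn, data, offset, len);
        }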
  6. 28 May 2015, 4 commits
  7. 26 May 2015, 4 commits
  8. 20 May 2015, 1 commit
    • KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async · 3520469d
      Committed by Paolo Bonzini
      gfn_to_pfn_async is used in just one place, and because of x86-specific
      treatment that place will need to look at the memory slot.  Hence inline
      it into try_async_pf and export __gfn_to_pfn_memslot.
      
      The patch also switches the subsequent call to gfn_to_pfn_prot to use
      __gfn_to_pfn_memslot.  This is a small optimization.  Finally, remove
      the now-unused async argument of __gfn_to_pfn.  (A sketch of the
      resulting call pattern follows this entry.)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3520469d
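      A sketch of the resulting call pattern in the x86 fault path.  The
      surrounding logic of try_async_pf is heavily abridged, and the exact
      pfn type of this era is assumed; the point is the two calls into the
      now-exported __gfn_to_pfn_memslot, first non-blocking, then blocking.

        static bool try_async_pf(struct kvm_vcpu *vcpu, gfn_t gfn,
                                 pfn_t *pfn, bool write, bool *writable)
        {
            struct kvm_memory_slot *slot = gfn_to_memslot(vcpu->kvm, gfn);
            bool async = false;

            /* Non-blocking attempt: 'async' is set if faulting in the page
             * would have to sleep. */
            *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async,
                                        write, writable);
            if (!async)
                return false;   /* *pfn is already valid (or an error pfn) */

            /* ... possibly queue an async page fault for the guest here ... */

            /* Blocking fallback: wait for the page to be faulted in. */
            *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL,
                                        write, writable);
            return false;
        }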
  9. 08 May 2015, 2 commits
  10. 21 April 2015, 1 commit
    • KVM: PPC: Book3S HV: Create debugfs file for each guest's HPT · e23a808b
      Committed by Paul Mackerras
      This creates a debugfs directory for each HV guest (assuming debugfs
      is enabled in the kernel config), and within that directory, a file
      by which the contents of the guest's HPT (hashed page table) can be
      read.  The directory is named vmnnnn, where nnnn is the PID of the
      process that created the guest.  The file is named "htab".  This is
      intended to help in debugging problems in the host's management
      of guest memory.
      
      The contents of the file consist of a series of lines like this:
      
        3f48 4000d032bf003505 0000000bd7ff1196 00000003b5c71196
      
      The first field is the index of the entry in the HPT, and the second and
      third are the two doublewords of the HPT entry, so the third field
      contains the real page number that is mapped by the entry if the entry's
      valid bit is set.  The fourth field is the guest's view of the second
      doubleword of the entry, so it contains the guest physical address.
      (The format of the second through fourth fields is described in the
      Power ISA and also in arch/powerpc/include/asm/mmu-hash64.h.)  A small
      reader for this file follows this entry.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
      e23a808b
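      A small userspace reader for the new file, as a usage example.  The path
      assumes debugfs is mounted at /sys/kernel/debug and a guest created by
      PID 4242; both are illustrative.

        #include <stdio.h>

        int main(void)
        {
            char line[128];
            FILE *f = fopen("/sys/kernel/debug/kvm/vm4242/htab", "r");

            if (!f) {
                perror("open htab");
                return 1;
            }
            /* Each line: HPT index, the two HPT doublewords, and the guest's
             * view of the second doubleword (guest physical address). */
            while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
            fclose(f);
            return 0;
        }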
  11. 10 April 2015, 1 commit
  12. 08 April 2015, 1 commit
  13. 01 April 2015, 1 commit
  14. 27 March 2015, 2 commits
  15. 24 March 2015, 1 commit
    • kvm: avoid page allocation failure in kvm_set_memory_region() · 74496134
      Committed by Igor Mammedov
      A KVM guest can fail to start up with the following trace on the host:
      
      qemu-system-x86: page allocation failure: order:4, mode:0x40d0
      Call Trace:
        dump_stack+0x47/0x67
        warn_alloc_failed+0xee/0x150
        __alloc_pages_direct_compact+0x14a/0x150
        __alloc_pages_nodemask+0x776/0xb80
        alloc_kmem_pages+0x3a/0x110
        kmalloc_order+0x13/0x50
        kmemdup+0x1b/0x40
        __kvm_set_memory_region+0x24a/0x9f0 [kvm]
        kvm_set_ioapic+0x130/0x130 [kvm]
        kvm_set_memory_region+0x21/0x40 [kvm]
        kvm_vm_ioctl+0x43f/0x750 [kvm]
      
      The failure happens when attempting to allocate pages for
      'struct kvm_memslots'.  The structure does not, however, have to live
      in physically contiguous (kmalloc'ed) address space, so change the
      allocation to kvm_kvzalloc(), which falls back to vmalloc when the
      size exceeds a page.  (A sketch of this allocation pattern follows
      this entry.)
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      74496134
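      A sketch of the allocation pattern the fix switches to, following the
      kvm_kvzalloc() helper referenced in the message: virtually contiguous
      memory is enough for 'struct kvm_memslots', so fall back to vzalloc
      for anything larger than a page and avoid order-4 kmalloc failures
      like the trace above.

        #include <linux/mm.h>
        #include <linux/slab.h>
        #include <linux/vmalloc.h>

        static void *kvm_kvzalloc(unsigned long size)
        {
            if (size > PAGE_SIZE)
                return vzalloc(size);           /* no physical contiguity */
            return kzalloc(size, GFP_KERNEL);   /* small: kmalloc is fine */
        }

      The matching free path must cope with both allocators, since vfree and
      kfree are not interchangeable; kvfree() does exactly that.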
  16. 19 March 2015, 1 commit
  17. 11 March 2015, 1 commit
  18. 10 March 2015, 11 commits
  19. 12 February 2015, 1 commit
  20. 09 February 2015, 1 commit
  21. 06 February 2015, 1 commit
    • kvm: add halt_poll_ns module parameter · f7819512
      Committed by Paolo Bonzini
      This patch introduces a new module parameter for the KVM module; when it
      is nonzero, KVM attempts a bit of polling on every HLT before scheduling
      itself out via kvm_vcpu_block.
      
      This parameter helps a lot for latency-bound workloads; in particular,
      I tested it with O_DSYNC writes with a battery-backed disk in the host.
      In this case, writes are fast (because the data doesn't have to go all
      the way to the platters) but they cannot be merged by either the host or
      the guest.  KVM's performance here is usually around 30% of bare metal,
      or 50% if you use cache=directsync or cache=writethrough (these
      parameters prevent the guest from sending pointless flush requests, and
      at the same time they are not slow because of the battery-backed cache).
      The bad performance happens because on every halt the host CPU decides
      to halt itself too.  When the interrupt comes, the vCPU thread is then
      migrated to a new physical CPU, and in general the latency is horrible
      because the vCPU thread has to be scheduled back in.
      
      With this patch, performance reaches 60-65% of bare metal and, more
      importantly, 99% of what you get if you use idle=poll in the guest.
      This means that the tunable gets rid of this particular bottleneck, and
      more work can be done to improve performance in the kernel or QEMU.
      
      Of course there is a price to pay: every time an otherwise idle vCPU
      is interrupted, it will poll unnecessarily and thus impose a little
      load on the host.  The above results were obtained with a mostly
      arbitrary value of the parameter (500000), and the load was around
      1.5-2.5% CPU usage on one of the host's cores for each idle guest vCPU.
      
      The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll,
      that can be used to tune the parameter.  It counts how many HLT
      instructions received an interrupt during the polling period; each
      successful poll avoids scheduling the VCPU thread out and back in, and
      may also avoid a likely trip to C1 and back for the physical CPU.
      
      While idle, a 4-VCPU Linux VM halts around 10 times per second.  Of
      these halts, almost all are failed polls.  During the benchmark, by
      contrast, basically all halts end within the polling period, except a
      more or less constant stream of 50 per second coming from vCPUs that
      are not running the benchmark.  The wasted time is thus very low.
      Things may be slightly different for Windows VMs, which have a ~10 ms
      timer tick.
      
      The effect is also visible on Marcelo's recently introduced latency
      test for the TSC deadline timer.  Though of course a non-RT kernel has
      awful latency bounds, the latency of the timer is around 8000-10000
      clock cycles with halt_poll_ns set, compared to 20000-120000 without
      it.  For the TSC deadline timer, then, the effect is both a smaller
      average latency and a smaller variance.  (A sketch of the polling loop
      follows this entry.)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f7819512
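      A sketch of the polling step this parameter enables in kvm_vcpu_block,
      condensed from the behavior the message describes: spin for up to
      halt_poll_ns nanoseconds before scheduling out, bumping the new stat
      on a successful poll.  Details such as the stop conditions are
      simplified.

        static unsigned int halt_poll_ns;
        module_param(halt_poll_ns, uint, 0644);

        void kvm_vcpu_block(struct kvm_vcpu *vcpu)
        {
            if (halt_poll_ns) {
                ktime_t stop = ktime_add_ns(ktime_get(), halt_poll_ns);

                do {
                    /* Negative return: a wakeup condition (e.g. an interrupt)
                     * arrived while polling; skip the schedule-out entirely. */
                    if (kvm_vcpu_check_block(vcpu) < 0) {
                        ++vcpu->stat.halt_successful_poll;
                        return;
                    }
                    cpu_relax();
                } while (ktime_before(ktime_get(), stop));
            }

            /* ... otherwise fall through to the usual wait-queue sleep ... */
        }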