1. 17 Feb 2017 (6 commits)
  2. 25 Dec 2016 (2 commits)
  3. 15 Dec 2016 (1 commit)
    • mm: unexport __get_user_pages_unlocked() · 8b7457ef
      Committed by Lorenzo Stoakes
      Unexport the low-level __get_user_pages_unlocked() function and replace
      invocations with calls to more appropriate higher-level functions.
      
      In hva_to_pfn_slow() we are able to replace __get_user_pages_unlocked()
      with get_user_pages_unlocked() since we can now pass gup_flags.
      
      In async_pf_execute() and process_vm_rw_single_vec() we need to pass
      different tsk, mm arguments, so get_user_pages_remote() is the sane
      replacement in these cases (having added manual acquisition and release
      of mmap_sem); see the sketch after this entry.
      
      Additionally get_user_pages_remote() reintroduces use of the FOLL_TOUCH
      flag.  However, this flag was originally silently dropped by commit
      1e987790 ("mm/gup: Introduce get_user_pages_remote()"), so this
      appears to have been unintentional and reintroducing it is therefore not
      an issue.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/20161027095141.2569-3-lstoakes@gmail.com
      Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krcmar <rkrcmar@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8b7457ef
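      A hedged sketch of the get_user_pages_remote() replacement pattern this
      commit describes, against the GUP signatures of that era (the wrapper
      function and its name are illustrative, not the verbatim diff):

        #include <linux/mm.h>
        #include <linux/sched.h>

        /* async_pf_execute()-style call site: tsk/mm are remote, so take
         * mmap_sem by hand and let get_user_pages_remote() do the rest
         * (it also restores FOLL_TOUCH internally, as noted above). */
        static void sketch_prefault(struct mm_struct *mm, unsigned long addr)
        {
                down_read(&mm->mmap_sem);
                get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE,
                                      NULL, NULL);
                up_read(&mm->mmap_sem);
        }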
  4. 01 Dec 2016 (1 commit)
  5. 28 Nov 2016 (1 commit)
    • KVM: Export kvm module parameter variables · ec76d819
      Committed by Suraj Jitindar Singh
      The kvm module has the parameters halt_poll_ns, halt_poll_ns_grow, and
      halt_poll_ns_shrink. Halt polling was recently added to the powerpc kvm-hv
      module and these parameters were essentially duplicated for that. There is
      no benefit to this duplication and it can lead to confusion when trying to
      tune halt polling.
      
      Thus move the definition of these variables to kvm_host.h and export them.
      This will allow the kvm-hv module to use the same module parameters by
      accessing these variables, which will be implemented in the next patch,
      meaning that they will no longer be duplicated. A sketch of the export
      pattern follows this entry.
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      ec76d819
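      A hedged sketch of the export pattern described above (the variable
      names come from the commit text; the default-value macro and the 0644
      permissions match the mainline code of that era, but treat them as
      assumptions):

        /* include/linux/kvm_host.h */
        extern unsigned int halt_poll_ns;
        extern unsigned int halt_poll_ns_grow;
        extern unsigned int halt_poll_ns_shrink;

        /* virt/kvm/kvm_main.c: one definition, visible to kvm-hv too. */
        unsigned int halt_poll_ns = KVM_HALT_POLL_NS_DEFAULT;
        module_param(halt_poll_ns, uint, 0644);
        EXPORT_SYMBOL_GPL(halt_poll_ns);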
  6. 22 Nov 2016 (1 commit)
  7. 03 Nov 2016 (1 commit)
  8. 26 Oct 2016 (1 commit)
    • KVM: fix OOPS on flush_work · 36343f6e
      Committed by Paolo Bonzini
      The conversion done by commit 3706feac ("KVM: Remove deprecated
      create_singlethread_workqueue") is broken.  It flushes a single work
      item, &irqfd->shutdown, instead of all of them, and, even worse, if
      there is no irqfd on the list you get a NULL pointer dereference.
      Revert the virt/kvm/eventfd.c part of that patch; to avoid the
      deprecated function, just allocate our own workqueue---it does
      not even have to be unbound---with alloc_workqueue, as sketched after
      this entry.
      
      Fixes: 3706feac
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      36343f6e
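      A hedged sketch of the shape of the fix (function names follow the
      virt/kvm code of that era; error handling is trimmed):

        #include <linux/workqueue.h>

        static struct workqueue_struct *irqfd_cleanup_wq;

        int kvm_irqfd_init(void)
        {
                /* A plain bound, unordered workqueue is enough; flushing
                 * it waits for *all* queued shutdown items, and there is
                 * no NULL dereference when the irqfd list is empty. */
                irqfd_cleanup_wq = alloc_workqueue("kvm-irqfd-cleanup", 0, 0);
                if (!irqfd_cleanup_wq)
                        return -ENOMEM;
                return 0;
        }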
  9. 25 Oct 2016 (1 commit)
    • mm: unexport __get_user_pages() · 0d731759
      Committed by Lorenzo Stoakes
      This patch unexports the low-level __get_user_pages() function.
      
      Recent refactoring of the get_user_pages* functions allows flags to be
      passed through get_user_pages(), which eliminates the need for access to
      this function from its one user, kvm.
      
      We can see that the two calls to get_user_pages() which replace
      __get_user_pages() in kvm_main.c are equivalent by examining their call
      stacks (a sketch of the first call site follows this entry):
      
        get_user_page_nowait():
          get_user_pages(start, 1, flags, page, NULL)
          __get_user_pages_locked(current, current->mm, start, 1, page, NULL, NULL,
      			    false, flags | FOLL_TOUCH)
          __get_user_pages(current, current->mm, start, 1,
      		     flags | FOLL_TOUCH | FOLL_GET, page, NULL, NULL)
      
        check_user_page_hwpoison():
          get_user_pages(addr, 1, flags, NULL, NULL)
          __get_user_pages_locked(current, current->mm, addr, 1, NULL, NULL, NULL,
      			    false, flags | FOLL_TOUCH)
          __get_user_pages(current, current->mm, addr, 1, flags | FOLL_TOUCH, NULL,
      		     NULL, NULL)
      Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0d731759
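      A hedged sketch of the first call site after the change, mirroring the
      call stack above (era signature: get_user_pages(start, nr_pages,
      gup_flags, pages, vmas)):

        static int get_user_page_nowait(unsigned long start, int write,
                                        struct page **page)
        {
                unsigned int flags = FOLL_NOWAIT | FOLL_HWPOISON;

                if (write)
                        flags |= FOLL_WRITE;

                /* gup_flags now pass straight through; FOLL_TOUCH and
                 * FOLL_GET are added internally, per the stack above. */
                return get_user_pages(start, 1, flags, page, NULL);
        }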
  10. 19 Oct 2016 (1 commit)
  11. 16 Sep 2016 (2 commits)
  12. 08 Sep 2016 (2 commits)
    • KVM: Add provisioning for ulong vm stats and u64 vcpu stats · 8a7e75d4
      Committed by Suraj Jitindar Singh
      VMs and vCPUs have statistics associated with them which can be viewed
      within debugfs. Currently it is assumed within the vcpu_stat_get() and
      vm_stat_get() functions that all of these statistics are represented as
      u32s; however, the next patch adds some u64 vcpu statistics.

      Change all vcpu statistics to u64 and modify vcpu_stat_get() accordingly.
      Since vcpu statistics are per vcpu, they will only be updated by a single
      vcpu at a time, so this shouldn't present a problem on 32-bit machines
      which can't atomically increment 64-bit numbers. However, vm statistics
      could potentially be updated by multiple vcpus from that vm at a time.
      To avoid the overhead of atomics, make all vm statistics ulong, such that
      they are 64-bit on 64-bit systems, where they can be atomically
      incremented, and are 32-bit on 32-bit systems, which may not be able to
      atomically increment 64-bit numbers. Modify vm_stat_get() to expect
      ulongs. A sketch of this sizing rule follows this entry.
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
      8a7e75d4
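      A hedged sketch of the sizing rule described above (an illustrative
      subset of the arch stat structures, not the full definitions):

        struct kvm_vm_stat {
                ulong remote_tlb_flush;   /* shared by vCPUs: native word */
        };

        struct kvm_vcpu_stat {
                u64 halt_successful_poll; /* per-vCPU: u64 safe anywhere */
        };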
    • KVM: Remove deprecated create_singlethread_workqueue · 3706feac
      Committed by Bhaktipriya Shridhar
      The workqueue "irqfd_cleanup_wq" queues a single work item
      &irqfd->shutdown and hence doesn't require ordering. It is a host-wide
      workqueue for issuing deferred shutdown requests aggregated from all
      vm* instances. It is not being used on a memory reclaim path.
      Hence, it has been converted to use system_wq.
      The work item has been flushed in kvm_irqfd_release().
      
      The workqueue "wqueue" queues a single work item &timer->expired
      and hence doesn't require ordering. Also, it is not being used on
      a memory reclaim path. Hence, it has been converted to use system_wq.
      
      System workqueues have been able to handle a high level of concurrency
      for a long time now, so a singlethreaded workqueue is no longer needed
      just to gain concurrency. Unlike a dedicated per-cpu workqueue created
      with create_singlethread_workqueue(), system_wq allows multiple work
      items to overlap executions even on the same CPU; however, a per-cpu
      workqueue doesn't have any CPU locality or global ordering guarantee
      unless the target CPU is explicitly specified, and thus the increase of
      local concurrency shouldn't make any difference. A sketch of the
      conversion follows this entry.
      Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3706feac
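      A hedged sketch of the conversion (the work item comes from the commit
      text; the wrapper functions are illustrative). Note that the eventfd
      part of this conversion is reverted by "KVM: fix OOPS on flush_work"
      above:

        #include <linux/workqueue.h>

        static void sketch_deactivate(struct kvm_kernel_irqfd *irqfd)
        {
                /* system_wq replaces the dedicated singlethreaded queue. */
                schedule_work(&irqfd->shutdown);
        }

        static void sketch_release_wait(struct kvm_kernel_irqfd *irqfd)
        {
                /* kvm_irqfd_release(): wait for this item to finish. */
                flush_work(&irqfd->shutdown);
        }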
  13. 12 Aug 2016 (2 commits)
  14. 19 Jul 2016 (1 commit)
  15. 15 Jul 2016 (5 commits)
  16. 05 Jul 2016 (2 commits)
  17. 16 Jun 2016 (3 commits)
    • kvm: Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES · caf1ff26
      Committed by Xiubo Li
      Recently we experienced a guest crash with 8 cores and 3 disks, with
      QEMU error logs as below:
      
      qemu-system-x86_64: /build/qemu-2.0.0/kvm-all.c:984:
      kvm_irqchip_commit_routes: Assertion `ret == 0' failed.
      
      We then found a patch (bdf026317d) in the QEMU tree which was said to
      fix this bug.
      
      Executing the following script reproduces the BUG quickly:
      
      irq_affinity.sh
      ========================================================================
      
      vda_irq_num=25
      vdb_irq_num=27
      while [ 1 ]
      do
          for irq in {1,2,4,8,10,20,40,80}
              do
                  echo $irq > /proc/irq/$vda_irq_num/smp_affinity
                  echo $irq > /proc/irq/$vdb_irq_num/smp_affinity
                  dd if=/dev/vda of=/dev/zero bs=4K count=100 iflag=direct
                  dd if=/dev/vdb of=/dev/zero bs=4K count=100 iflag=direct
              done
      done
      ========================================================================
      
      The following QEMU log line was added in the QEMU code and is displayed
      when this bug is reproduced:
      
      kvm_irqchip_commit_routes: max gsi: 1008, nr_allocated_irq_routes: 1024,
      irq_routes->nr: 1024, gsi_count: 1024.
      
      That is to say, when irq_routes->nr == 1024 there are 1024 routing
      entries, but the kernel code will just return -EINVAL whenever
      routes->nr >= 1024.

      nr is the number of routing entries, which lies in
      [1 ~ KVM_MAX_IRQ_ROUTES], not an index in [0 ~ KVM_MAX_IRQ_ROUTES - 1].

      This patch fixes the BUG above; see the sketch after this entry.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
      Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com>
      Signed-off-by: Zhang Zhuoyu <zhangzhuoyu@cmss.chinamobile.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      caf1ff26
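      A hedged sketch of the bound check being fixed, in the
      KVM_SET_GSI_ROUTING ioctl path (surrounding code elided):

        struct kvm_irq_routing routing;

        /* routing.nr counts entries (1 .. KVM_MAX_IRQ_ROUTES); it is not
         * an index, so a full table of exactly KVM_MAX_IRQ_ROUTES entries
         * must be accepted. */
        if (routing.nr > KVM_MAX_IRQ_ROUTES)    /* was: >= */
                return -EINVAL;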
    • KVM: remove kvm_vcpu_compatible · 557abc40
      Committed by Paolo Bonzini
      The new created_vcpus field makes it possible to avoid the race between
      irqchip and VCPU creation in a much nicer way: just check under kvm->lock
      whether a VCPU has already been created (sketched after this entry).
      
      We can then remove KVM_APIC_ARCHITECTURE too, because at this point the
      symbol is only governing the default definition of kvm_vcpu_compatible.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      557abc40
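      A hedged sketch of the check this enables (modeled on the x86
      KVM_CREATE_IRQCHIP path of that era; the helper name is illustrative):

        mutex_lock(&kvm->lock);
        r = -EINVAL;
        /* Refuse an irqchip once any vCPU creation has begun. */
        if (!kvm->created_vcpus)
                r = sketch_create_irqchip(kvm); /* illustrative helper */
        mutex_unlock(&kvm->lock);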
    • KVM: introduce kvm->created_vcpus · 6c7caebc
      Committed by Paolo Bonzini
      The race between creating the irqchip and the first VCPU is
      currently fixed by checking the presence of an irqchip before
      updating kvm->online_vcpus, and undoing the whole VCPU creation
      if someone created the irqchip in the meantime.

      Instead, introduce a new field in struct kvm that will count VCPUs
      under a mutex, without the atomic access and memory ordering that we
      need elsewhere to protect the vcpus array.  This also plugs the race
      and is more easily applicable in all similar circumstances; a sketch
      follows this entry.
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      6c7caebc
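      A hedged sketch of the counting side, close to the
      kvm_vm_ioctl_create_vcpu() code of that era:

        mutex_lock(&kvm->lock);
        if (kvm->created_vcpus == KVM_MAX_VCPUS) {
                mutex_unlock(&kvm->lock);
                return -EINVAL;
        }
        /* Plain increment: kvm->lock, not atomics, guards the count. */
        kvm->created_vcpus++;
        mutex_unlock(&kvm->lock);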
  18. 02 Jun 2016 (1 commit)
  19. 25 May 2016 (1 commit)
    • KVM: Create debugfs dir and stat files for each VM · 536a6f88
      Committed by Janosch Frank
      This patch adds a kvm debugfs subdirectory for each VM, which is named
      after its pid and file descriptor. The directories contain the same
      kind of files that are already in the kvm debugfs directory, but the
      data exported through them is now VM specific.
      
      This makes the debugfs kvm data a convenient alternative to the
      tracepoints, which already have per-VM data. The debugfs data is easy
      to read and has low overhead. A sketch of the directory creation
      follows this entry.

      [includes fixes by Dan Carpenter]
      CC: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      536a6f88
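      A hedged sketch of the per-VM directory creation (the "pid-fd" naming
      follows the commit text; the buffer size and variable names are
      illustrative):

        char dir_name[64];

        snprintf(dir_name, sizeof(dir_name), "%d-%d",
                 task_pid_nr(current), fd);
        /* One subdirectory per VM under the main kvm debugfs dir. */
        kvm->debugfs_dentry = debugfs_create_dir(dir_name, kvm_debugfs_dir);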
  20. 19 May 2016 (2 commits)
  21. 13 May 2016 (1 commit)
    • KVM: halt_polling: provide a way to qualify wakeups during poll · 3491caf2
      Committed by Christian Borntraeger
      Some wakeups should not be considered a successful poll. For example, on
      s390 I/O interrupts are usually floating, which means that _ALL_ CPUs
      would be considered runnable - letting all vCPUs poll all the time for
      transaction-like workloads, even if one vCPU would be enough.
      This can result in huge CPU usage for large guests.
      This patch lets architectures provide a way to qualify wakeups as
      good or bad with regard to polls.

      For s390 the implementation will fence off halt polling for anything but
      known good, single vCPU events. The s390 implementation for floating
      interrupts does a wakeup for one vCPU, but the interrupt will be delivered
      by whatever CPU checks first for a pending interrupt. We prefer the
      woken-up CPU by marking the poll of this CPU as a "good" poll.
      This code will also mark several other wakeup reasons like IPI or
      expired timers as "good". This will of course also mark some events as
      not successful. As KVM on z always runs as a second-level hypervisor,
      though, we prefer not to poll unless we are really sure.
      
      This patch successfully limits the CPU usage for cases like uperf 1byte
      transactional ping pong workload or wakeup heavy workload like OLTP
      while still providing a proper speedup.
      
      This also introduces a new vcpu stat, "halt_poll_no_tuning", that marks
      wakeups considered not good for polling; a sketch of the qualification
      hook follows this entry.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Radim Krčmář <rkrcmar@redhat.com> (for an earlier version)
      Cc: David Matlack <dmatlack@google.com>
      Cc: Wanpeng Li <kernellwp@gmail.com>
      [Rename config symbol. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      3491caf2
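      A hedged sketch of the qualification hook (this approximates the helper
      introduced by the change; the config symbol was renamed by Paolo, as
      noted above):

        static inline bool vcpu_valid_wakeup(struct kvm_vcpu *vcpu)
        {
        #ifdef CONFIG_HAVE_KVM_INVALID_WAKEUPS
                /* Architectures mark "good" wakeups; only those count as
                 * successful polls and grow the poll window. */
                return vcpu->valid_wakeup;
        #else
                return true;
        #endif
        }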
  22. 12 May 2016 (1 commit)
    • kvm: introduce KVM_MAX_VCPU_ID · 0b1b1dfd
      Committed by Greg Kurz
      The KVM_MAX_VCPUS define provides the maximum number of vCPUs per guest, and
      also the upper limit for vCPU ids. This is okay for all archs except PowerPC
      which can have higher ids, depending on the cpu/core/thread topology. In the
      worst case (single threaded guest, host with 8 threads per core), it limits
      the maximum number of vCPUs to KVM_MAX_VCPUS / 8.
      
      This patch separates the vCPU numbering from the total number of vCPUs, with
      the introduction of KVM_MAX_VCPU_ID, as the maximal valid value for vCPU ids
      plus one.
      
      The corresponding KVM_CAP_MAX_VCPU_ID allows userspace to validate vCPU ids
      before passing them to KVM_CREATE_VCPU.
      
      This patch only implements KVM_MAX_VCPU_ID with a specific value for
      PowerPC; other archs continue to return KVM_MAX_VCPUS instead. A sketch
      of the default and the id check follows this entry.
      Suggested-by: Radim Krcmar <rkrcmar@redhat.com>
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0b1b1dfd
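      A hedged sketch of the default definition and the id check the commit
      describes (matching kvm_host.h and the vCPU creation path of that era):

        /* Archs without a higher limit keep vCPU ids below KVM_MAX_VCPUS. */
        #ifndef KVM_MAX_VCPU_ID
        #define KVM_MAX_VCPU_ID KVM_MAX_VCPUS
        #endif

        /* kvm_vm_ioctl_create_vcpu(): validate the requested id early. */
        if (id >= KVM_MAX_VCPU_ID)
                return -EINVAL;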
  23. 22 Mar 2016 (1 commit)