1. 25 Apr 2017, 1 commit
  2. 02 Mar 2017, 1 commit
  3. 17 Feb 2017, 2 commits
    • KVM: race-free exit from KVM_RUN without POSIX signals · 460df4c1
      Committed by Paolo Bonzini
      The purpose of the KVM_SET_SIGNAL_MASK API is to let userspace "kick"
      a VCPU out of KVM_RUN through a POSIX signal.  A signal is attached
      to a dummy signal handler; by blocking the signal outside KVM_RUN and
      unblocking it inside, this possible race is closed:
      
                VCPU thread                     service thread
         --------------------------------------------------------------
              check flag
                                                set flag
                                                raise signal
              (signal handler does nothing)
              KVM_RUN
      
      However, one issue with KVM_SET_SIGNAL_MASK is that it has to take
      tsk->sighand->siglock on every KVM_RUN.  This lock is often on a
      remote NUMA node, because it is on the node of a thread's creator.
      Taking this lock can be very expensive if there are many userspace
      exits (as is the case for SMP Windows VMs without Hyper-V reference
      time counter).
      
      As an alternative, we can put the flag directly in kvm_run so that
      KVM can see it:
      
                VCPU thread                     service thread
         --------------------------------------------------------------
                                                raise signal
              signal handler
                set run->immediate_exit
              KVM_RUN
                check run->immediate_exit
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
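
      The new flag admits a much simpler userspace protocol.  A minimal
      sketch (assuming the vcpu fd and the mmap'ed kvm_run area are set
      up elsewhere; the SIGUSR1 choice and the loop shape are
      illustrative, not part of the patch):

        #include <linux/kvm.h>
        #include <signal.h>
        #include <sys/ioctl.h>

        static struct kvm_run *run;       /* mmap()ed from the vcpu fd */

        static void kick_handler(int sig)
        {
                (void)sig;
                run->immediate_exit = 1;  /* KVM checks this before guest entry */
        }

        static void vcpu_loop(int vcpu_fd)
        {
                signal(SIGUSR1, kick_handler);  /* service thread kicks via pthread_kill() */
                for (;;) {
                        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && run->immediate_exit) {
                                run->immediate_exit = 0;  /* acknowledge the kick */
                                /* handle the service thread's request, then resume */
                                continue;
                        }
                        /* dispatch on run->exit_reason as usual */
                }
        }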
    • s390: Audit and remove any remaining unnecessary uses of module.h · d3217967
      Committed by Paul Gortmaker
      Historically a lot of these existed because we did not have
      a distinction between what was modular code and what was providing
      support to modules via EXPORT_SYMBOL and friends.  That changed
      when we forked out support for the latter into the export.h file.
      
      This means we should be able to reduce the usage of module.h
      in code that is obj-y in Makefiles or bool in Kconfig.  The
      advantage of doing so is that module.h itself sources about 15
      other headers, adding significantly to what we feed cpp, and it
      can obscure which headers we are actually using.
      
      Since module.h was the source of init.h (for __init) and of
      export.h (for EXPORT_SYMBOL), each changed instance was checked
      for the presence of either and the include replaced as needed.
      An instance where module_param was used without moduleparam.h was
      also fixed, as were implicit uses of the ptrace.h and string.h
      headers.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
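
      In practice the conversion is usually a one-for-two include swap
      in each built-in file; an illustrative example (not a hunk from
      this patch):

        /* before: pulls in roughly 15 other headers via module.h */
        #include <linux/module.h>

        /* after: only what the obj-y code actually uses */
        #include <linux/export.h>       /* EXPORT_SYMBOL() */
        #include <linux/init.h>         /* __init */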
  4. 06 Feb 2017, 1 commit
  5. 30 Jan 2017, 6 commits
  6. 20 Jan 2017, 1 commit
    • KVM: s390: do not expose random data via facility bitmap · 04478197
      Committed by Christian Borntraeger
      kvm_s390_get_machine() populates the facility bitmap by copying bytes
      from the host results that are stored in a 256-byte array in the
      prefix page.  The KVM code, however, bounds the copy by the size of
      the target buffer (2k), thus copying and exposing unrelated kernel
      memory (mostly machine-check-related logout data).

      Let's use the size of the source buffer instead.  This is ok, as the
      target buffer will always be greater than or equal to the source
      buffer: the KVM-internal buffers (and thus
      S390_ARCH_FAC_LIST_SIZE_BYTE) cover the maximum size allowed by
      STFLE, which is 256 doublewords.  All structures are zero-allocated,
      so we can leave bytes 256-2047 unchanged (see the sketch after this
      entry).

      Add a similar fix for kvm_arch_init_vm().
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      [found with smatch]
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: stable@vger.kernel.org
      Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
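
      The bug pattern, reduced to a compilable sketch (the names and the
      copy site are simplified stand-ins, not the real KVM structures):

        #include <string.h>

        #define S390_ARCH_FAC_LIST_SIZE_BYTE 2048   /* 2k target buffer */

        struct machine_sketch {
                unsigned char fac_list[S390_ARCH_FAC_LIST_SIZE_BYTE];
        };

        /* the host facility list in the prefix page is only 256 bytes */
        static unsigned char host_fac_list[256];

        static void populate(struct machine_sketch *mach)
        {
                /* buggy: bounded by the target, reads 1792 bytes of
                 * unrelated kernel memory past the end of the source:
                 * memcpy(mach->fac_list, host_fac_list,
                 *        sizeof(mach->fac_list));
                 */

                /* fixed: bounded by the source; bytes 256-2047 of the
                 * zero-allocated target stay zero */
                memcpy(mach->fac_list, host_fac_list, sizeof(host_fac_list));
        }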
  7. 23 Nov 2016, 2 commits
    • KVM: s390: handle floating point registers in the run ioctl not in vcpu_put/load · e1788bb9
      Committed by Christian Borntraeger
      Right now we switch the host fprs/vrs in kvm_arch_vcpu_load and switch
      back in kvm_arch_vcpu_put.  This process is already optimized
      since commit 9977e886 ("s390/kernel: lazy restore fpu registers"),
      which avoids double save/restores on schedule.  We still reload the
      pointers and test the guest fpc on each context switch, though.

      We can minimize the cost of vcpu_load/put by doing the test in the
      VCPU_RUN ioctl itself.  As most VCPU threads almost never exit to
      userspace in the common fast path, this lets us avoid the overhead
      for the common case (eventfd-driven I/O, all exits including sleep
      handled in the kernel), making kvm_arch_vcpu_load/put basically
      disappear from perf top (see the sketch after this entry).

      Also adapt the fpu get/set ioctls.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
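
      Structurally the change looks roughly like this sketch (the helper
      names are invented stand-ins for the real s390 save/restore
      primitives): the host/guest FP switch brackets the whole run loop
      instead of every vcpu_load/vcpu_put.

        struct fpu_sketch { unsigned int fpc; double fprs[16]; };

        static struct fpu_sketch host_fpu;

        static void save_fpu(struct fpu_sketch *s)       { (void)s; /* stfpc/std in reality */ }
        static void load_fpu(const struct fpu_sketch *s) { (void)s; /* lfpc/ld in reality */ }

        static int vcpu_ioctl_run(struct fpu_sketch *guest_fpu)
        {
                save_fpu(&host_fpu);    /* once per KVM_RUN, not per schedule */
                load_fpu(guest_fpu);

                /* ... run the guest; in-kernel exits and sleeps never
                 * touch the host FP state ... */

                save_fpu(guest_fpu);
                load_fpu(&host_fpu);    /* restored only on return to userspace */
                return 0;
        }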
    • KVM: s390: handle access registers in the run ioctl not in vcpu_put/load · 31d8b8d4
      Committed by Christian Borntraeger
      Right now we save the host access registers in kvm_arch_vcpu_load
      and load them in kvm_arch_vcpu_put, and vice versa for the guest
      access registers.  On schedule this means that we load/save access
      registers multiple times.

      E.g. a VCPU_RUN with just one reschedule and then a return does:

      [from user space via VCPU_RUN]
      - save the host registers in kvm_arch_vcpu_load (via ioctl)
      - load the guest registers in kvm_arch_vcpu_load (via ioctl)
      - do guest stuff
      - decide to schedule/sleep
      - save the guest registers in kvm_arch_vcpu_put (via sched)
      - load the host registers in kvm_arch_vcpu_put (via sched)
      - save the host registers in switch_to (via sched)
      - schedule
      - return
      - load the host registers in switch_to (via sched)
      - save the host registers in kvm_arch_vcpu_load (via sched)
      - load the guest registers in kvm_arch_vcpu_load (via sched)
      - do guest stuff
      - decide to go to userspace
      - save the guest registers in kvm_arch_vcpu_put (via ioctl)
      - load the host registers in kvm_arch_vcpu_put (via ioctl)
      [back to user space]
      
      As the kernel does not use access registers, we can avoid
      this reloading and simply piggy-back on switch_to (letting it save
      the guest values instead of the host values in thread.acrs) by
      moving the host/guest switch into the VCPU_RUN ioctl function.
      We now do:
      
      [from user space via VCPU_RUN]
      - save the host registers in kvm_arch_vcpu_ioctl_run
      - load the guest registers in kvm_arch_vcpu_ioctl_run
      - do guest stuff
      - decide to schedule/sleep
      - save the guest registers in switch_to
      - schedule
      - return
      - load the guest registers in switch_to (via sched)
      - do guest stuff
      - decide to go to userspace
      - save the guest registers in kvm_arch_vcpu_ioctl_run
      - load the host registers in kvm_arch_vcpu_ioctl_run
      
      This seems to save about 10% of the time spent in the
      vcpu_put/load functions, according to perf.

      As vcpu_load no longer switches the acrs, we can also load them
      directly in kvm_arch_vcpu_ioctl_set_sregs (see the sketch after
      this entry).
      Suggested-by: Fan Zhang <zhangfan@linux.vnet.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
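
      A compilable sketch of the piggy-backing idea (thread_acrs stands
      in for current->thread.acrs, which switch_to() saves and restores;
      the struct is a simplified stand-in for the real vcpu): while a
      vcpu is inside KVM_RUN, thread.acrs deliberately holds the guest
      values, so scheduling preserves them for free.

        #include <string.h>

        #define NUM_ACRS 16
        static unsigned int thread_acrs[NUM_ACRS];  /* switch_to() territory */

        struct vcpu_sketch {
                unsigned int host_acrs[NUM_ACRS];
                unsigned int guest_acrs[NUM_ACRS];  /* run->s.regs.acrs in real KVM */
        };

        static void run_ioctl_entry(struct vcpu_sketch *v)
        {
                memcpy(v->host_acrs, thread_acrs, sizeof(v->host_acrs));
                memcpy(thread_acrs, v->guest_acrs, sizeof(thread_acrs));
        }

        static void run_ioctl_exit(struct vcpu_sketch *v)
        {
                memcpy(v->guest_acrs, thread_acrs, sizeof(v->guest_acrs));
                memcpy(thread_acrs, v->host_acrs, sizeof(thread_acrs));
        }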
  8. 16 Sep 2016, 1 commit
  9. 08 Sep 2016, 3 commits
  10. 29 Aug 2016, 1 commit
  11. 26 Aug 2016, 1 commit
  12. 25 Aug 2016, 1 commit
  13. 12 Aug 2016, 2 commits
  14. 18 Jul 2016, 1 commit
  15. 01 Jul 2016, 1 commit
  16. 21 Jun 2016, 11 commits
  17. 20 Jun 2016, 4 commits