1. 07 Jan 2016, 1 commit
  2. 16 Dec 2015, 1 commit
  3. 30 Nov 2015, 4 commits
  4. 23 Oct 2015, 1 commit
  5. 16 Oct 2015, 1 commit
  6. 25 Sep 2015, 1 commit
  7. 16 Sep 2015, 1 commit
    • KVM: add halt_attempted_poll to VCPU stats · 62bea5bf
      Committed by Paolo Bonzini
      This new statistic can help diagnose VCPUs that, for any reason,
      trigger bad behavior of the halt_poll_ns autotuning.
      
      For example, say halt_poll_ns = 480000 and wakeups are spaced at exactly
      479us, 481us, 479us, 481us. Then KVM always fails polling and wastes
      10+20+40+80+160+320+480 = 1110 microseconds out of every
      479+481+479+481+479+481+479 = 3359 microseconds. The VCPU then
      consumes about 30% more CPU than it would without polling.  This
      shows up as an abnormally high number of attempted polls compared
      to successful polls.
      
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reviewed-by: David Matlack <dmatlack@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
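
      The arithmetic in this example can be checked directly.  The following
      stand-alone C program (an editor's sketch, not kernel code) reproduces
      the quoted poll-window and wakeup-spacing numbers:

        #include <stdio.h>

        /* Poll windows wasted by grow-on-failure autotuning and the wakeup
         * spacing, both in microseconds, taken from the example above. */
        int main(void)
        {
            int windows_us[] = { 10, 20, 40, 80, 160, 320, 480 };
            int gaps_us[]    = { 479, 481, 479, 481, 479, 481, 479 };
            int wasted = 0, total = 0;

            for (int i = 0; i < 7; i++) {
                wasted += windows_us[i];
                total  += gaps_us[i];
            }
            printf("wasted %d us out of every %d us (~%.0f%% extra CPU)\n",
                   wasted, total, 100.0 * wasted / total);
            return 0;
        }

      It reports roughly a third of extra CPU time, which matches the "about
      30% more CPU" figure and would indeed show up as many attempted polls
      with few successful ones.
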
  8. 29 Jul 2015, 2 commits
  9. 22 Jul 2015, 1 commit
    • s390/kernel: lazy restore fpu registers · 9977e886
      Committed by Hendrik Brueckner
      Improve the save and restore behavior of FPU register contents to use the
      vector extension within the kernel.
      
      The kernel does not use floating-point or vector registers, so saving
      and restoring the FPU register contents is performed only for signal
      handling and process switching.  To prepare for using vector
      instructions and vector registers within the kernel, enhance the save
      behavior and implement a lazy restore on return to user space from a
      system call or interrupt.
      
      To implement the lazy restore, save_fpu_regs() sets a CPU information
      flag, CIF_FPU, to indicate that the FPU registers must be restored.
      Saving and setting CIF_FPU is performed in an atomic fashion to be
      interrupt-safe.  When the kernel wants to use the vector extension or
      wants to change the FPU register state for a task during signal handling,
      save_fpu_regs() must be called first.  The CIF_FPU flag is also set at
      process switch.  At return to user space, the FPU state is restored.  In
      particular, the FPU state includes the floating-point or vector register
      contents, as well as vector-enablement and floating-point control.  The
      FPU state restore and the clearing of CIF_FPU are also performed in an
      atomic fashion.
      
      For KVM, the FPU register state is restored when the general-purpose
      guest registers are restored, before the SIE instruction is started.
      Because the path towards the SIE instruction is interruptible, the CIF_FPU
      flag must be checked again right before going into SIE.  If it is set, the
      guest registers must be reloaded by re-entering the outer SIE loop.  This
      is the same behavior as when the SIE critical section is interrupted.
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
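
      A rough user-space model of the lazy-restore pattern described in this
      commit (an editor's sketch; the simplified FPU state, the helpers, and
      main() are illustrative, not the s390 kernel code):

        #include <stdatomic.h>
        #include <stdio.h>

        #define CIF_FPU (1u << 0)          /* "FPU restore pending" flag */

        static atomic_uint cpu_flags;      /* stands in for the per-CPU flags */
        static double task_fpu_state;      /* saved register contents (simplified) */
        static double hw_fpu_state;        /* what the "hardware" currently holds */

        /* Save the current contents and mark them as needing a restore;
         * the flag update is atomic, as the commit requires. */
        static void save_fpu_regs(void)
        {
            task_fpu_state = hw_fpu_state;
            atomic_fetch_or(&cpu_flags, CIF_FPU);
        }

        /* On return to user space, restore only if something invalidated
         * the registers; test-and-clear is a single atomic operation. */
        static void return_to_user(void)
        {
            if (atomic_fetch_and(&cpu_flags, ~CIF_FPU) & CIF_FPU) {
                hw_fpu_state = task_fpu_state;
                puts("CIF_FPU was set: FPU state restored");
            } else {
                puts("registers untouched: nothing to restore");
            }
        }

        int main(void)
        {
            hw_fpu_state = 1.0;
            return_to_user();        /* kernel did not touch the FPU */

            save_fpu_regs();         /* kernel wants the vector registers... */
            hw_fpu_state = -1.0;     /* ...and clobbers them */
            return_to_user();        /* lazy restore happens here */
            return 0;
        }

      The same flag is what the SIE entry path re-checks: if CIF_FPU became
      set on the way to SIE, the guest registers have to be reloaded first.
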
  10. 26 May 2015, 1 commit
  11. 08 May 2015, 2 commits
  12. 01 Apr 2015, 1 commit
    • KVM: s390: deliver floating interrupts in order of priority · 6d3da241
      Committed by Jens Freimann
      This patch makes interrupt handling compliant with the z/Architecture
      Principles of Operation with regard to interrupt priorities.
      
      Add a bitmap for pending floating interrupts. Each bit relates to an
      interrupt type and its list. A set bit indicates that a list contains
      items (interrupts) which need to be delivered.  When delivering
      interrupts on a CPU, we can merge the existing bitmap for cpu-local
      interrupts with the floating-interrupt bitmap and have a single
      mechanism for delivery.
      Currently we have one list for all kinds of floating interrupts and a
      corresponding spin lock. This patch adds a separate list per interrupt
      type. Exceptions to this are service signal and machine check
      interrupts, as only one of each can be pending at a time.
      Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
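
      A toy C model of the bitmap described in this commit: one bit per
      floating interrupt type, kept set while that type's list is non-empty,
      merged with the cpu-local pending bits so delivery can walk a single
      priority order (an editor's sketch; the type names and the order used
      here are illustrative, not the architected priorities):

        #include <stdio.h>

        enum irq_type { IRQ_MCHK, IRQ_SERVICE, IRQ_IO, IRQ_NR_TYPES };

        static const char *const irq_name[IRQ_NR_TYPES] = {
            [IRQ_MCHK]    = "machine check",
            [IRQ_SERVICE] = "service signal",
            [IRQ_IO]      = "I/O",
        };

        /* Merge local and floating pending bits and deliver in bit order. */
        static void deliver_pending(unsigned int local, unsigned int floating)
        {
            unsigned int pending = local | floating;

            for (int type = 0; type < IRQ_NR_TYPES; type++)
                if (pending & (1u << type))
                    printf("deliver %s interrupt\n", irq_name[type]);
        }

        int main(void)
        {
            unsigned int floating = (1u << IRQ_IO) | (1u << IRQ_SERVICE);
            unsigned int local    = (1u << IRQ_MCHK);

            deliver_pending(local, floating);   /* one mechanism for both */
            return 0;
        }
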
  13. 17 Mar 2015, 2 commits
  14. 06 Mar 2015, 4 commits
  15. 04 Mar 2015, 1 commit
    • KVM: s390: include guest facilities in kvm facility test · 981467c9
      Committed by Michael Mueller
      Most facility-related decisions in KVM have to take into account:
      
      - the facilities offered by the underlying run container (LPAR/VM)
      - the facilities supported by the KVM code itself
      - the facilities requested by a guest VM
      
      This patch adds the facilities requested by the KVM driver to the test routine.
      
      It additionally renames struct s390_model_fac to kvm_s390_fac and its field
      names to be more meaningful.
      
      The semantics of the facilities stored in the KVM architecture structure
      are changed. The address arch.model.fac->list now points to the guest
      facility list and arch.model.fac->mask points to the KVM facility mask.
      
      This patch fixes the behaviour of KVM for guests that ignore the
      guest-visible facility bits; for example, previously a guest could use
      transactional-memory instructions on a host that supports them even if
      the chosen CPU model did not offer them.
      
      The userspace interface is not affected by this change.
      Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
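
      A sketch of the combined test described in this commit: a facility is
      usable by the guest only if it is both in the KVM facility mask (what
      KVM itself can virtualize) and in the guest facility list (what the
      chosen CPU model offers).  Bit numbering follows the left-to-right
      STFLE convention; the structure layout, helper names and the example
      facility number are illustrative, not the kvm-s390 code:

        #include <stdint.h>
        #include <stdio.h>

        #define FAC_WORDS 4

        struct kvm_s390_fac {
            uint64_t mask[FAC_WORDS];   /* facilities KVM can virtualize */
            uint64_t list[FAC_WORDS];   /* facilities offered to the guest */
        };

        /* Facility bits are numbered from the leftmost bit of word 0. */
        static int test_fac_bit(unsigned int nr, const uint64_t *words)
        {
            return (words[nr / 64] >> (63 - (nr % 64))) & 1;
        }

        static int guest_has_facility(const struct kvm_s390_fac *fac,
                                      unsigned int nr)
        {
            return test_fac_bit(nr, fac->mask) && test_fac_bit(nr, fac->list);
        }

        int main(void)
        {
            struct kvm_s390_fac fac = { { 0 }, { 0 } };

            /* Host and KVM support facility 73 (e.g. transactional
             * execution), but the chosen CPU model leaves it out. */
            fac.mask[73 / 64] |= 1ULL << (63 - (73 % 64));

            printf("guest may use facility 73: %s\n",
                   guest_has_facility(&fac, 73) ? "yes" : "no");
            return 0;
        }

      With the mask-and-list test the answer is "no", which is the behaviour
      this patch enforces for guests that ignore the facility bits.
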
  16. 09 Feb 2015, 3 commits
  17. 06 Feb 2015, 1 commit
    • kvm: add halt_poll_ns module parameter · f7819512
      Committed by Paolo Bonzini
      This patch introduces a new module parameter for the KVM module; when it
      is present, KVM attempts a bit of polling on every HLT before scheduling
      itself out via kvm_vcpu_block.
      
      This parameter helps a lot with latency-bound workloads; in particular,
      I tested it with O_DSYNC writes with a battery-backed disk in the host.
      In this case, writes are fast (because the data doesn't have to go all
      the way to the platters) but they cannot be merged by either the host or
      the guest.  KVM's performance here is usually around 30% of bare metal,
      or 50% if you use cache=directsync or cache=writethrough (these
      parameters prevent the guest from sending pointless flush requests, and
      at the same time they are not slow because of the battery-backed cache).
      The bad performance happens because on every halt the host CPU decides
      to halt itself too.  When the interrupt comes, the vCPU thread is then
      migrated to a new physical CPU, and in general the latency is horrible
      because the vCPU thread has to be scheduled back in.
      
      With this patch, performance reaches 60-65% of bare metal and, more
      importantly, 99% of what you get if you use idle=poll in the guest.  This
      means that the tunable gets rid of this particular bottleneck, and more
      work can be done to improve performance in the kernel or QEMU.
      
      Of course there is some price to pay; every time an otherwise idle vCPU
      is interrupted by an interrupt, it will poll unnecessarily and thus
      impose a little load on the host.  The above results were obtained with
      a mostly arbitrary value of the parameter (500000), and the load was
      around 1.5-2.5% CPU usage on one of the host's cores for each idle guest
      vCPU.
      
      The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll,
      that can be used to tune the parameter.  It counts how many HLT
      instructions received an interrupt during the polling period; each
      successful poll avoids having Linux schedule the VCPU thread out and back
      in, and may also avoid a likely trip to C1 and back for the physical CPU.
      
      While idle, a 4-VCPU Linux VM halts around 10 times per second.  Of
      these halts, almost all are failed polls.  During the benchmark, by
      contrast, basically all halts end within the polling period, except a
      more or less constant stream of 50 per second coming from vCPUs that are
      not running the benchmark.  The wasted time is thus very low.  Things may
      be slightly different for Windows VMs, which have a ~10 ms timer tick.
      
      The effect is also visible on Marcelo's recently-introduced latency
      test for the TSC deadline timer.  Though of course a non-RT kernel has
      awful latency bounds, the latency of the timer is around 8000-10000 clock
      cycles compared to 20000-120000 without setting halt_poll_ns.  Thus, for
      the TSC deadline timer, the effect is both a smaller average latency and
      a smaller variance.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
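
      A small user-space model of the poll-before-block idea (an editor's
      sketch, not the kvm_vcpu_block() code): before giving up the CPU, spin
      for up to halt_poll_ns nanoseconds waiting for a wakeup, and count the
      polls that pay off, as halt_successful_poll does:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        static uint64_t halt_poll_ns = 500000;   /* the new module parameter */
        static uint64_t halt_successful_poll;    /* the new stat */

        static uint64_t now_ns(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
        }

        /* Stand-in for "an interrupt arrived for this vCPU". */
        static bool wakeup_pending(uint64_t wakeup_at_ns)
        {
            return now_ns() >= wakeup_at_ns;
        }

        static void vcpu_halt(uint64_t wakeup_at_ns)
        {
            uint64_t start = now_ns();

            while (now_ns() - start < halt_poll_ns) {
                if (wakeup_pending(wakeup_at_ns)) {
                    halt_successful_poll++;   /* no schedule-out, no C1 trip */
                    return;
                }
            }
            puts("poll failed: would block and schedule the thread out");
        }

        int main(void)
        {
            vcpu_halt(now_ns() + 100000);    /* wakeup after 100us: poll wins */
            vcpu_halt(now_ns() + 2000000);   /* wakeup after 2ms: poll gives up */
            printf("halt_successful_poll = %llu\n",
                   (unsigned long long)halt_successful_poll);
            return 0;
        }

      The tradeoff the commit describes is visible here: a successful poll
      skips the schedule-out entirely, while a failed poll burns at most
      halt_poll_ns of CPU time on the host.
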
  18. 23 Jan 2015, 7 commits
  19. 28 Nov 2014, 3 commits
  20. 28 Oct 2014, 2 commits