1. 19 May 2015, 19 commits
    • x86/fpu: Move asm/xcr.h to asm/fpu/internal.h · befc61ad
      Committed by Ingo Molnar
      Now that all drivers using FPU internals are converted to public APIs,
      move xcr.h's definitions into fpu/internal.h and remove xcr.h.
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      befc61ad
    • x86/fpu: Simplify fpstate_init_curr() usage · c4d72e2d
      Committed by Ingo Molnar
      Now that fpstate_init_curr() is not doing implicit allocations
      anymore, almost all uses of it involve a very simple pattern:
      
      	if (!fpu->fpstate_active)
      		fpstate_init_curr(fpu);
      
      which is basically activating the FPU fpstate if it was not active
      before.
      
      So propagate the check into the function itself, and rename the
      function according to its new purpose:
      
      	fpu__activate_curr(fpu);
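      
      As a minimal sketch of the resulting shape (stand-in types and a
      simplified body, not the kernel's exact code):
      
      	struct fpu {
      		int	fpstate_active;	/* stand-in for the real 'struct fpu' */
      	};
      
      	static void fpstate_init(struct fpu *fpu)
      	{
      		(void)fpu;	/* placeholder: would init the in-memory context area */
      	}
      
      	/* The callers' "if (!active) init" pattern, folded into the function: */
      	void fpu__activate_curr(struct fpu *fpu)
      	{
      		if (!fpu->fpstate_active) {
      			fpstate_init(fpu);
      			fpu->fpstate_active = 1;
      		}
      	}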
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4d72e2d
    • x86/fpu, kvm: Simplify fx_init() · 0ee6a517
      Committed by Ingo Molnar
      Now that fpstate_init() cannot fail, the error return of fx_init()
      has lost its purpose. Eliminate the error return and propagate this
      change to all callers.
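      
      A hedged sketch of what the change means for a caller (stand-in types,
      not the actual KVM code):
      
      	struct kvm_vcpu_stub { int id; };	/* stand-in for 'struct kvm_vcpu' */
      
      	/* Was 'int fx_init(...)': since fpstate_init() cannot fail, the
      	 * error return carries no information and the function becomes void. */
      	static void fx_init(struct kvm_vcpu_stub *vcpu)
      	{
      		(void)vcpu;	/* would initialize the vcpu's FPU state here */
      	}
      
      	static int vcpu_setup(struct kvm_vcpu_stub *vcpu)
      	{
      		/* Before: if (fx_init(vcpu)) return -ENOMEM; */
      		fx_init(vcpu);	/* After: nothing to check or propagate */
      		return 0;
      	}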
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0ee6a517
    • x86/fpu: Rename fpstate_alloc_init() to fpstate_init_curr() · e62bb3d8
      Committed by Ingo Molnar
      Now that there are no FPU context allocations, rename fpstate_alloc_init()
      to fpstate_init_curr(), to signal that it initializes the fpstate and
      marks it active, for the current task.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e62bb3d8
    • x86/fpu: Remove failure return from fpstate_alloc_init() · 91d93d0e
      Committed by Ingo Molnar
      Remove the failure code and propagate this down to callers.
      
      Note that this function still has an 'init' aspect, so it must still
      be called.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      91d93d0e
    • x86/fpu: Remove failure paths from fpstate-alloc low level functions · c4d6ee6e
      Committed by Ingo Molnar
      Now that we always allocate the FPU context as part of task_struct, there's
      no need for separate allocations - remove them and their primary failure
      handling code.
      
      ( Note that there are still secondary error codes that have become
        superfluous; those will be removed in separate patches. )
      
      Move the somewhat misplaced setup_xstate_comp() call to the core.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4d6ee6e
    • x86/fpu: Simplify FPU handling by embedding the fpstate in task_struct (again) · 7366ed77
      Committed by Ingo Molnar
      So 6 years ago we made the FPU fpstate dynamically allocated:
      
        aa283f49 ("x86, fpu: lazy allocation of FPU area - v5")
        61c4628b ("x86, fpu: split FPU state from task struct - v5")
      
      In hindsight this was a mistake:
      
         - it complicated context allocation failure handling, such as:
      
      		/* kthread execs. TODO: cleanup this horror. */
      		if (WARN_ON(fpstate_alloc_init(fpu)))
      			force_sig(SIGKILL, tsk);
      
         - it caused us to enable irqs in fpu__restore():
      
                      local_irq_enable();
                      /*
                       * does a slab alloc which can sleep
                       */
                      if (fpstate_alloc_init(fpu)) {
                              /*
                               * ran out of memory!
                               */
                              do_group_exit(SIGKILL);
                              return;
                      }
                      local_irq_disable();
      
         - it (slightly) slowed down task creation/destruction by adding
           slab allocation/free patterns.
      
         - it made access to context contents (slightly) slower by adding
           one more pointer dereference.
      
      The motivation for the dynamic allocation was two-fold:
      
         - reduce memory consumption by non-FPU tasks
      
         - allocate and handle only the necessary amount of context for
           various XSAVE processors that have varying hardware frame
           sizes.
      
      These days, with glibc using SSE memcpy by default and GCC optimizing
      for SSE/AVX by default, the scope of FPU-using apps on an x86 system is
      much larger than it was 6 years ago.
      
      For example on a freshly installed Fedora 21 desktop system, with a
      recent kernel, all non-kthread tasks have used the FPU shortly after
      bootup.
      
      Also, even modern embedded x86 CPUs try to support the latest vector
      instruction set - so they too will often use the larger xstate frame
      sizes.
      
      So remove the dynamic allocation complication by embedding the FPU
      fpstate in task_struct again. This should make the FPU a lot more
      accessible to all sorts of atomic contexts.
      
      We could still optimize for the xstate frame size in the future,
      by moving the state structure to the last element of task_struct,
      and allocating only a part of that.
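      
      An illustrative before/after of the layout (stand-in types; the real
      structures are larger and the 'after' one is embedded in task_struct):
      
      	union fpregs_frame_stub {
      		unsigned char	regs[512];	/* stand-in for the FXSAVE/XSAVE frame */
      	};
      
      	/* Before: a separately allocated context, reached through a pointer. */
      	struct fpu_before {
      		unsigned char		fpstate_active;
      		union fpregs_frame_stub	*state;		/* slab-allocated, can fail */
      	};
      
      	/* After: the frame is embedded, so there is no allocation that can
      	 * fail and no extra dereference on every access. */
      	struct fpu_after {
      		unsigned char		fpstate_active;
      		union fpregs_frame_stub	state;
      	};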
      
      This change is kept minimal by retaining the ctx_alloc()/free()
      routines (which now do nothing substantial) - we'll remove them in
      the following patches.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7366ed77
    • x86/fpu: Rename fpu_save_init() to copy_fpregs_to_fpstate() · 4f836347
      Committed by Ingo Molnar
      So fpu_save_init() is a historic name: it dates back to when the only
      way to save the FPU state was FNSAVE, which cleared (well, destroyed)
      the FPU state after saving it.
      
      Nowadays the name is misleading, because ever since the introduction of
      FXSAVE (and more modern FPU saving instructions) the 'we need to reload
      the FPU state' part is only true if there's a pending FPU exception [*],
      which is almost never the case.
      
      So rename it to copy_fpregs_to_fpstate() to make it clear what's
      happening. Also add a few comments about why we cannot keep registers
      in certain cases.
      
      Also clean up the control flow a bit, to make it more apparent when
      we are dropping/keeping FP registers, and to optimize the common
      case (of keeping fpregs) some more.
      
      [*] Probably not true anymore, modern instructions always leave the FPU
          state intact, even if exceptions are pending: because pending FP
          exceptions are posted on the next FP instruction, not asynchronously.
      
          They were truly asynchronous back in the IRQ13 case, and we had to
          synchronize with them, but that code is not working anymore: we don't
          have IRQ13 mapped in the IDT anymore.
      
          But a cleanup patch is obviously not the place to change subtle behavior.
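      
      A hedged sketch of the distinction the new name (and its return value)
      is meant to carry; the real function uses XSAVE/FXSAVE/FNSAVE inline
      assembly variants:
      
      	struct fpu_stub {
      		unsigned char	fpstate[512];	/* stand-in for the in-memory save area */
      	};
      
      	static int cpu_has_fxsr = 1;		/* assumption: FXSAVE-capable CPU */
      
      	/*
      	 * Copy the live FP registers into the in-memory fpstate.
      	 * Return 1 if the register contents are still valid afterwards
      	 * (FXSAVE and newer leave them intact), 0 if they were destroyed
      	 * (legacy FNSAVE clears the FPU as a side effect of saving).
      	 */
      	static int copy_fpregs_to_fpstate(struct fpu_stub *fpu)
      	{
      		(void)fpu;
      		if (cpu_has_fxsr)
      			return 1;	/* fxsave(): registers kept */
      		return 0;		/* fnsave(): registers dropped */
      	}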
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4f836347
    • x86/fpu: Rename xsave.header::xstate_bv to 'xfeatures' · 400e4b20
      Committed by Ingo Molnar
      'xsave.header::xstate_bv' is a misnomer - what does 'bv' stand for?
      
      It probably comes from the 'XGETBV' instruction name, but I could
      not find in the Intel documentation where that abbreviation comes
      from. It could mean 'bit vector' - or something else?
      
      But how about - instead of guessing about a weird name - we name
      the field in an obvious and descriptive way that tells us exactly
      what it does?
      
      So rename it to 'xfeatures', which is a bitmask of the
      xfeatures that are active in that context structure.
      
      An eyesore like:
      
                 fpu->state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
      
      is now much more readable:
      
                 fpu->state->xsave.header.xfeatures |= XSTATE_FP;
      
      This form is not just infinitely more readable, but also shorter.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      400e4b20
    • x86/fpu: Rename 'xsave_hdr' to 'header' · 3a54450b
      Committed by Ingo Molnar
      Code like:
      
                 fpu->state->xsave.xsave_hdr.xstate_bv |= XSTATE_FP;
      
      is an eyesore, not only because the words 'xsave' and 'state'
      are repeated twice (!), but also because of the 'hdr' and 'bv'
      abbreviations, which are pretty meaningless at first glance.
      
      Start cleaning this up by renaming 'xsave_hdr' to 'header'.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3a54450b
    • x86/fpu: Rename fpu-internal.h to fpu/internal.h · 78f7f1e5
      Committed by Ingo Molnar
      This moves all the FPU related header files under a unified, hierarchical
      naming scheme:
      
       - asm/fpu/types.h:      FPU related data types, needed for 'struct task_struct',
                               widely included in almost all kernel code, and hence kept
                               as small as possible.
      
       - asm/fpu/api.h:        FPU related 'public' methods exported to other subsystems.
      
       - asm/fpu/internal.h:   FPU subsystem internal methods
      
       - asm/fpu/xsave.h:      XSAVE support internal methods
      
      (Also standardize the header guard in asm/fpu/internal.h.)
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      78f7f1e5
    • x86/fpu: Use 'struct fpu' in fpstate_alloc_init() · db2b1d3a
      Committed by Ingo Molnar
      Migrate this function to pure 'struct fpu' usage.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      db2b1d3a
    • x86/fpu: Get rid of PF_USED_MATH usage, convert it to fpu->fpstate_active · c5bedc68
      Committed by Ingo Molnar
      Introduce a simple fpu->fpstate_active flag in the fpu context data structure
      and use that instead of PF_USED_MATH in task->flags.
      
      Testing this flag byte should be slightly more efficient than
      testing a bit in a bitmask, but the main advantage is that most
      FPU functions can now be performed on a 'struct fpu' alone; they
      don't need access to 'struct task_struct' anymore.
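      
      A sketch of the change at a test site (stand-in types; the PF_USED_MATH
      value matches older kernels but is only illustrative here):
      
      	#define PF_USED_MATH	0x00002000	/* old per-task flag bit */
      
      	struct fpu_ctx_stub { unsigned char fpstate_active; };
      
      	struct task_stub {
      		unsigned long		flags;
      		struct fpu_ctx_stub	fpu;	/* stand-in for tsk->thread.fpu */
      	};
      
      	static int task_uses_fpu(struct task_stub *tsk)
      	{
      		/* Before: return !!(tsk->flags & PF_USED_MATH); */
      
      		/* After: a dedicated flag byte inside the fpu context, so FPU
      		 * code can work on the fpu alone, without the task_struct: */
      		return tsk->fpu.fpstate_active;
      	}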
      
      There's a slight linecount increase, mostly due to the 'fpu' local
      variables and due to extra comments. The local variables will go away
      once we move most of the FPU methods to pure 'struct fpu' parameters.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c5bedc68
    • x86/fpu: Open code PF_USED_MATH usages · 4c138410
      Committed by Ingo Molnar
      PF_USED_MATH is used directly, but also in a handful of helper inlines.
      
      To ease the elimination of PF_USED_MATH, convert all inline helpers
      to open-coded PF_USED_MATH usage.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4c138410
    • x86/fpu: Fix header file dependencies of fpu-internal.h · f89e32e0
      Committed by Ingo Molnar
      Fix a minor header file dependency bug in asm/fpu-internal.h: it
      relies on i387.h but does not include it. All users of fpu-internal.h
      included it explicitly.
      
      Also remove unnecessary includes, to reduce compilation time.
      
      This also makes it easier to use it as a standalone header file
      for FPU internals, such as an upcoming C module in arch/x86/kernel/fpu/.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f89e32e0
    • x86/fpu: Rename fpu_finit() to fpstate_init() · c0ee2cf6
      Committed by Ingo Molnar
      Make it clear that we are initializing the in-memory FPU context area,
      not the FPU registers.
      
      Also move it to the fpu__*() namespace.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c0ee2cf6
    • x86/fpu: Rename fpu_free() to fpstate_free() · a7c2a833
      Committed by Ingo Molnar
      Use the fpu__*() namespace.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a7c2a833
    • x86/fpu: Rename fpu_alloc() to fpstate_alloc() · ed97b085
      Committed by Ingo Molnar
      Use the fpu__*() namespace for fpstate_alloc() as well.
      
      Also add a comment about FPU state alignment.
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ed97b085
    • x86/fpu: Split an fpstate_alloc_init() function out of init_fpu() · 97185c95
      Committed by Ingo Molnar
      Most init_fpu() users don't want the register-saving aspect of the
      function: they call it for 'current', when FPU registers are not
      allocated and initialized yet.
      
      Split out a simplified API that does just that (and add debug-checks
      for these conditions): fpstate_alloc_init().
      
      Use it where appropriate.
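      
      A hedged sketch of the split-out API with the debug-checks the text
      mentions (stand-in types; assert() stands in for the kernel's warnings):
      
      	#include <assert.h>
      
      	struct task_stub {
      		int	is_current;		/* stand-in for 'tsk == current' */
      		struct { int fpstate_active; } fpu;
      	};
      
      	/* Only valid for the current task, and only before its FPU state
      	 * has been allocated/initialized - the conditions checked below. */
      	static void fpstate_alloc_init(struct task_stub *tsk)
      	{
      		assert(tsk->is_current);
      		assert(!tsk->fpu.fpstate_active);
      		/* ...allocate and initialize the fpstate, then: */
      		tsk->fpu.fpstate_active = 1;
      	}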
      Reviewed-by: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      97185c95
  2. 27 April 2015, 1 commit
    • kvm: x86: fix kvmclock update protocol · 5dca0d91
      Committed by Radim Krčmář
      The kvmclock spec says that the host will increment a version field to
      an odd number, then update stuff, then increment it to an even number.
      The host is buggy and doesn't do this, and the result is observable
      when one vcpu reads another vcpu's kvmclock data.
      
      There's no good way for a guest kernel to keep its vdso from reading
      a different vcpu's kvmclock data, but we don't need to care about
      changing VCPUs as long as we read consistent data from kvmclock.
      (The VCPU can change outside of this loop too, so it doesn't matter if
      we return a value not fit for this VCPU.)
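      
      A sketch of the resulting read protocol (field names follow the kvmclock
      layout, but this is illustrative, not the kernel's code):
      
      	#include <stdint.h>
      
      	struct pvclock_vcpu_time_info_stub {
      		volatile uint32_t	version;
      		uint32_t		pad0;
      		uint64_t		tsc_timestamp;
      		uint64_t		system_time;
      		uint32_t		tsc_to_system_mul;
      		int8_t			tsc_shift;
      		uint8_t			flags;
      		uint8_t			pad[2];
      	};
      
      	/*
      	 * Seqcount-style read: the host sets 'version' to an odd value before
      	 * updating the record and to an even value afterwards, so retry while
      	 * the version is odd or changes across the data reads.  (A real
      	 * implementation also needs read barriers around the data accesses.)
      	 */
      	static uint64_t read_system_time(struct pvclock_vcpu_time_info_stub *src)
      	{
      		uint32_t version;
      		uint64_t system_time;
      
      		do {
      			version = src->version;
      			system_time = src->system_time;
      		} while ((version & 1) || version != src->version);
      
      		return system_time;
      	}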
      
      Based on a patch by Radim Krčmář.
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5dca0d91
  3. 15 April 2015, 1 commit
  4. 08 April 2015, 7 commits
  5. 27 March 2015, 2 commits
  6. 18 March 2015, 2 commits
  7. 11 March 2015, 2 commits
  8. 10 March 2015, 1 commit
  9. 06 February 2015, 1 commit
    • kvm: add halt_poll_ns module parameter · f7819512
      Committed by Paolo Bonzini
      This patch introduces a new module parameter for the KVM module; when it
      is present, KVM attempts a bit of polling on every HLT before scheduling
      itself out via kvm_vcpu_block.
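      
      A hedged sketch of the idea (stand-in types and helpers, not the actual
      kvm_vcpu_block() code): poll for up to halt_poll_ns before really
      blocking, so a promptly arriving interrupt avoids the schedule-out/in
      round trip.
      
      	#include <stdint.h>
      	#include <time.h>
      
      	struct kvm_vcpu_stub { volatile int irq_pending; };
      
      	static unsigned int halt_poll_ns = 500000;	/* the new module parameter (ns) */
      
      	static uint64_t now_ns(void)
      	{
      		struct timespec ts;
      		clock_gettime(CLOCK_MONOTONIC, &ts);
      		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
      	}
      
      	static void vcpu_halt(struct kvm_vcpu_stub *vcpu)
      	{
      		uint64_t start = now_ns();
      
      		/* Poll: if an interrupt becomes pending quickly, we never have to
      		 * schedule the vCPU thread out and back in (a "successful poll"). */
      		while (halt_poll_ns && now_ns() - start < halt_poll_ns) {
      			if (vcpu->irq_pending)
      				return;
      		}
      
      		/* Nothing arrived during the window: block, as before the patch. */
      		/* (A kvm_vcpu_block()-equivalent sleep would go here.) */
      	}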
      
      This parameter helps a lot for latency-bound workloads---in particular
      I tested it with O_DSYNC writes with a battery-backed disk in the host.
      In this case, writes are fast (because the data doesn't have to go all
      the way to the platters) but they cannot be merged by either the host or
      the guest.  KVM's performance here is usually around 30% of bare metal,
      or 50% if you use cache=directsync or cache=writethrough (these
      parameters keep the guest from sending pointless flush requests, and
      at the same time they are not slow, thanks to the battery-backed cache).
      The bad performance happens because on every halt the host CPU decides
      to halt itself too.  When the interrupt comes, the vCPU thread is then
      migrated to a new physical CPU, and in general the latency is horrible
      because the vCPU thread has to be scheduled back in.
      
      With this patch performance reaches 60-65% of bare metal and, more
      important, 99% of what you get if you use idle=poll in the guest.  This
      means that the tunable gets rid of this particular bottleneck, and more
      work can be done to improve performance in the kernel or QEMU.
      
      Of course there is some price to pay; every time an otherwise idle vCPU
      is interrupted by an interrupt, it will poll unnecessarily and thus
      impose a little load on the host.  The above results were obtained with
      a mostly random value of the parameter (500000), and the load was around
      1.5-2.5% CPU usage on one of the host's cores for each idle guest vCPU.
      
      The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll,
      that can be used to tune the parameter.  It counts how many HLT
      instructions received an interrupt during the polling period; each
      successful poll avoids Linux having to schedule the VCPU thread out and back
      in, and may also avoid a likely trip to C1 and back for the physical CPU.
      
      While idle, a 4-VCPU Linux VM halts around 10 times per second.
      Of these halts, almost all are failed polls.  During the benchmark,
      instead, basically all halts end within the polling period, except a more
      or less constant stream of 50 per second coming from vCPUs that are not
      running the benchmark.  The wasted time is thus very low.  Things may
      be slightly different for Windows VMs, which have a ~10 ms timer tick.
      
      The effect is also visible on Marcelo's recently-introduced latency
      test for the TSC deadline timer.  Though of course a non-RT kernel has
      awful latency bounds, the latency of the timer is around 8000-10000 clock
      cycles compared to 20000-120000 without setting halt_poll_ns.  For the TSC
      deadline timer, thus, the effect is both a smaller average latency and
      a smaller variance.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f7819512
  10. 30 January 2015, 1 commit
  11. 29 January 2015, 2 commits
  12. 26 January 2015, 1 commit