1. 14 February 2011 (4 commits)
    • x86: Scale up the number of TLB invalidate vectors with NR_CPUs, up to 32 · 70e4a369
      Committed by Shaohua Li
      Make the maximum number of TLB invalidate vectors scale linearly with
      NR_CPUS, capped at 32 vectors.
      
      We currently have only 8 vectors for TLB invalidation, which is clearly
      inadequate. If we have a lot of CPUs, the CPUs have to share the 8 vectors,
      and tlbstate_lock is used to protect them. flush_tlb_page() is
      used heavily during page reclaim, which causes a lot of lock
      contention on tlbstate_lock.
      
      Andi Kleen suggested increasing the number of vectors to 32, which should
      be enough on typical current systems to reduce tlbstate_lock contention.
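      
      As a rough illustration, the scaling could be expressed as a compile-time
      cap (a minimal sketch; the macro name NUM_INVALIDATE_TLB_VECTORS follows
      this patch series, but the exact definition below is an assumption, not
      the literal patch):
      
          /* Sketch: one TLB invalidate vector per possible CPU, capped at 32. */
          #if NR_CPUS <= 32
          # define NUM_INVALIDATE_TLB_VECTORS   NR_CPUS
          #else
          # define NUM_INVALIDATE_TLB_VECTORS   32
          #endif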
      
      My test system has 4 sockets, 64 GB of memory and 64 CPUs. My
      workload creates 64 processes; each process mmaps and reads a big
      empty sparse file. The total size of the files is 2*total_mem,
      so this causes a lot of page reclaim.
      
      Below is the result I got from perf call-graph profiling:
      
       without the patch:
       ------------------
      
          24.25%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |
                              |--42.15%-- native_flush_tlb_others
      
       with the patch:
       ---------------
      
          14.96%           usemem  [kernel]                                   [k] _raw_spin_lock
                           |
                           --- _raw_spin_lock
                              |--13.89%-- native_flush_tlb_others
      
      This significantly reduces tlbstate_lock contention.
      Suggested-by: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1295232727.1949.709.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Allocate 32 tlb_invalidate_interrupt handler stubs · 3a09fb45
      Committed by Shaohua Li
      Add up to 32 invalidate_interrupt handler stubs. The number of handlers
      actually added depends on NUM_INVALIDATE_TLB_VECTORS, so when
      NUM_INVALIDATE_TLB_VECTORS is smaller than 32 the code size is
      reduced.
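      
      A rough sketch of how the stubs could be wired to their vectors
      (alloc_intr_gate() and INVALIDATE_TLB_VECTOR_START are existing kernel
      names of this era; the handler array and the loop form are a hypothetical
      simplification of the per-stub setup):
      
          /* Sketch: install one interrupt gate per configured TLB invalidate
           * vector; invalidate_stub[] stands in for the generated
           * invalidate_interrupt0..31 entry points. */
          for (i = 0; i < NUM_INVALIDATE_TLB_VECTORS; i++)
                  alloc_intr_gate(INVALIDATE_TLB_VECTOR_START + i,
                                  invalidate_stub[i]);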
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232725.1949.708.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Cleanup vector usage · 60f6e65d
      Committed by Shaohua Li
      Clean up the vector usage and make the vector numbers contiguous where
      possible.
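      
      For illustration, a contiguous layout of the high system vectors might
      look like the sketch below (the vector names are real identifiers from
      irq_vectors.h, but the specific numeric assignments are assumptions for
      the sketch, not necessarily the exact result of this patch):
      
          /* Sketch: keep the system vectors packed into one contiguous block
           * at the top of the vector space instead of leaving holes. */
          #define SPURIOUS_APIC_VECTOR            0xff
          #define ERROR_APIC_VECTOR               0xfe
          #define RESCHEDULE_VECTOR               0xfd
          #define CALL_FUNCTION_VECTOR            0xfc
          #define CALL_FUNCTION_SINGLE_VECTOR     0xfb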
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      LKML-Reference: <1295232722.1949.707.camel@sli10-conroe>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • klist: Fix object alignment on 64-bit. · 795abaf1
      Committed by David Miller
      Commit c0e69a5b ("klist.c: bit 0 in pointer can't be used as flag")
      intended to make sure that all klist objects were at least pointer-size
      aligned, but used the constant "4", which only works on 32-bit.
      
      Use "sizeof(void *)" which is correct in all cases.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: stable <stable@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 13 February 2011 (8 commits)
  3. 12 February 2011 (25 commits)
  4. 11 February 2011 (3 commits)