1. 15 Mar 2011, 1 commit
  2. 14 Feb 2011, 1 commit
  3. 18 Nov 2010, 1 commit
    • x86: Use online node real index in calculate_tlb_offset() · 9223081f
      Yinghai Lu authored
      Found a NUMA system with no RAM installed on the first socket,
      which hangs while executing init scripts.
      
      Bisected it to:
      
       | commit 93296720
       | Author: Shaohua Li <shaohua.li@intel.com>
       | Date:   Wed Oct 20 11:07:03 2010 +0800
       |
       |     x86: Spread tlb flush vector between nodes
      
      It turns out that when the first socket is not online, CPUs on
      node 1 can get a tlb_offset bigger than NUM_INVALIDATE_TLB_VECTORS.
      
      This can hit other layouts too: on a 4-socket system where socket 2
      is not populated, socket 3 gets a too-big tlb_offset.
      
      We need to use the real online node index, as sketched after this
      entry.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <4CDEDE59.40603@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
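      A minimal sketch of the fix, simplified from the actual patch and
      assuming the per-cpu tlb_vector_offset variable of kernels of that
      era: walk the online nodes with a dense running index instead of
      using the (possibly sparse) node number, so no CPU's offset can
      exceed NUM_INVALIDATE_TLB_VECTORS.
      
      	static void calculate_tlb_offset(void)
      	{
      		int cpu, node, idx = 0;
      		int nr_node_vecs;
      
      		/* Split the invalidate vectors evenly among online nodes. */
      		if (nr_online_nodes > NUM_INVALIDATE_TLB_VECTORS)
      			nr_node_vecs = 1;
      		else
      			nr_node_vecs = NUM_INVALIDATE_TLB_VECTORS / nr_online_nodes;
      
      		for_each_online_node(node) {
      			/* Dense index 'idx', not the sparse node id. */
      			int node_offset = (idx % NUM_INVALIDATE_TLB_VECTORS) *
      				nr_node_vecs;
      			int cpu_offset = 0;
      
      			for_each_cpu(cpu, cpumask_of_node(node)) {
      				per_cpu(tlb_vector_offset, cpu) =
      					node_offset + cpu_offset;
      				cpu_offset = (cpu_offset + 1) % nr_node_vecs;
      			}
      			idx++;
      		}
      	}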
  4. 01 Nov 2010, 1 commit
    • x86, mm: Fix section mismatch in tlb.c · cf38d0ba
      Rakib Mullick authored
      Mark tlb_cpuhp_notify as __cpuinit.  It is a callback function
      called from init_smp_flush(), which is itself __cpuinit - so the
      annotation is safe (sketched after this entry).
      
      The fix silences the following section mismatch warning:
      
       WARNING: arch/x86/mm/built-in.o(.text+0x356d): Section mismatch
       in reference from the function tlb_cpuhp_notify() to the
       function .cpuinit.text:calculate_tlb_offset()
       The function tlb_cpuhp_notify() references
       the function __cpuinit calculate_tlb_offset().
       This is often because tlb_cpuhp_notify lacks a __cpuinit
       annotation or the annotation of calculate_tlb_offset is wrong.
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <AANLkTinWQRG=HA9uB3ad0KAqRRTinL6L_4iKgF84coph@mail.gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
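      For reference, a hedged sketch of the annotated callback (body
      abbreviated from the tlb.c of that era; __cpuinit placed the
      function in .cpuinit.text so it could be discarded when CPU
      hotplug support was compiled out):
      
      	static int __cpuinit tlb_cpuhp_notify(struct notifier_block *n,
      					      unsigned long action, void *hcpu)
      	{
      		switch (action & 0xf) {
      		case CPU_ONLINE:
      		case CPU_DEAD:
      			/* Caller and callee now both live in .cpuinit.text. */
      			calculate_tlb_offset();
      		}
      		return NOTIFY_OK;
      	}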
  5. 21 Oct 2010, 1 commit
    • x86: Spread tlb flush vector between nodes · 93296720
      Shaohua Li authored
      Currently the flush tlb vector is chosen by:
      	sender = smp_processor_id() % 8
      This isn't optimal: CPUs from different nodes can end up on the same
      vector, which causes a lot of lock contention. Instead, we can assign
      the same vectors to CPUs within a node and different vectors to
      different nodes. This has two advantages:
      a. Any remaining lock contention is between CPUs of one node, which
      is much cheaper than contention between nodes.
      b. Lock contention between nodes is avoided entirely. This especially
      benefits kswapd, the biggest user of tlb flush, since kswapd binds
      itself to a specific node.
      
      In my test this cut CPU overhead by more than 20% in an extreme case.
      The test machine has 4 nodes with 16 CPUs each. I bound each node's
      kswapd to the first CPU of its node, then ran a workload with 4
      threads doing sequential mmap file reads over empty sparse files,
      which triggers a lot of page reclaim and tlb flushing. Binding kswapd
      makes the extreme tlb flush lock contention easy to reproduce;
      otherwise kswapd keeps migrating between the CPUs of a node and I
      can't get stable results. Real workloads won't always show such heavy
      tlb flush lock contention, but it is possible. The vector assignment
      is sketched after this entry.
      
      [ hpa: folded in fix from Eric Dumazet to use this_cpu_read() ]
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <1287544023.4571.8.camel@sli10-conroe.sh.intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
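      A sketch of the send-site effect, assuming the flush_state array and
      the per-cpu tlb_vector_offset this patch introduces (this_cpu_read()
      per the folded-in fix from Eric Dumazet):
      
      	static DEFINE_PER_CPU_READ_MOSTLY(int, tlb_vector_offset);
      
      	/* Before: sender = smp_processor_id() % NUM_INVALIDATE_TLB_VECTORS; */
      	sender = this_cpu_read(tlb_vector_offset);
      	f = &flush_state[sender];
      	spin_lock(&f->tlbstate_lock);
      	/* ... queue mm/va and send INVALIDATE_TLB_VECTOR_START + sender ... */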
  6. 22 Jul 2010, 1 commit
  7. 18 Feb 2010, 1 commit
  8. 19 Nov 2009, 1 commit
    • x86: Eliminate redundant/contradicting cache line size config options · 350f8f56
      Jan Beulich authored
      Rather than having both X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT
      (with inconsistent defaults), just the latter suffices, as the
      former can be calculated from it (see the sketch after this entry).
      
      To be consistent, also change X86_INTERNODE_CACHE_BYTES to
      X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA
      to account for last level cache line size (which here matters
      more than L1 cache line size).
      
      Finally, make sure that when X86_GENERIC is selected, its default
      value for X86_L1_CACHE_SHIFT is seen before those of the individual
      CPU model options (unlike on x86-64, where GENERIC_CPU is part of
      the choice construct, X86_GENERIC is a separate option on ix86).
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Acked-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Acked-by: Nick Piggin <npiggin@suse.de>
      LKML-Reference: <4AFD5710020000780001F8F0@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
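      The derivation that makes the BYTES options redundant is a one-line
      shift; a sketch of the resulting definitions (modeled on
      arch/x86/include/asm/cache.h, config names as in Kconfig):
      
      	#define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
      	#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
      
      	/* NUMA: 1 << 7 = 128 bytes, the last level cache line size. */
      	#define INTERNODE_CACHE_SHIFT	CONFIG_X86_INTERNODE_CACHE_SHIFT
      	#define INTERNODE_CACHE_BYTES	(1 << INTERNODE_CACHE_SHIFT)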
  9. 24 Sep 2009, 1 commit
  10. 22 Aug 2009, 1 commit
    • x86: don't call '->send_IPI_mask()' with an empty mask · b04e6373
      Linus Torvalds authored
      As noted in 83d349f3 ("x86: don't send
      an IPI to the empty set of CPU's"), some APICs will be very unhappy
      with an empty destination mask.  That commit added a WARN_ON() for
      that case, and avoided the resulting problem, but didn't fix the
      underlying reason why those empty-mask cases happened.
      
      This fixes that by checking whether the 'cpumask_andnot()' that
      removes the current CPU leaves any other CPUs in the set of CPUs to
      be sent a TLB flush, and by not calling down to the IPI code if the
      mask is empty (see the sketch after this entry).
      
      The reason this started happening at all is that we started passing just
      the CPU mask pointers around in commit 4595f962 ("x86: change
      flush_tlb_others to take a const struct cpumask"), and when we did that,
      the cpumask was no longer thread-local.
      
      Before that commit, flush_tlb_mm() used to create its own copy of
      'mm->cpu_vm_mask' and pass that copy down to the low-level flush
      routines after having tested that it was not empty.  But after changing
      it to just pass down the CPU mask pointer, the lower level TLB flush
      routines would now get a pointer to that 'mm->cpu_vm_mask', and that
      could still change - and become empty - after the test, due to other
      CPUs having flushed their own TLBs.
      
      See
      
      	http://bugzilla.kernel.org/show_bug.cgi?id=13933
      
      for details.
      Tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
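      The guard, sketched: cpumask_andnot() returns whether the resulting
      mask is non-empty, so the send can hinge on it (simplified from the
      flush_tlb_others_ipi() of that era):
      
      	f->flush_mm = mm;
      	f->flush_va = va;
      	/* Only send the IPI if some *other* CPU still needs a flush. */
      	if (cpumask_andnot(to_cpumask(f->flush_cpumask),
      			   cpumask, cpumask_of(smp_processor_id()))) {
      		apic->send_IPI_mask(to_cpumask(f->flush_cpumask),
      				    INVALIDATE_TLB_VECTOR_START + sender);
      		while (!cpumask_empty(to_cpumask(f->flush_cpumask)))
      			cpu_relax();
      	}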
  11. 18 Mar 2009, 1 commit
    • x86: add x2apic_wrmsr_fence() to x2apic flush tlb paths · ce4e240c
      Suresh Siddha authored
      Impact: optimize APIC IPI related barriers
      
      Uncached MMIO accesses for xapic are inherently serializing, so
      we don't need explicit barriers in the xapic IPI paths.
      
      x2apic MSR writes/reads don't have serializing semantics, so they
      need a serializing instruction or mfence to make all previous memory
      stores globally visible before the x2apic MSR write for the IPI.
      
      Add x2apic_wrmsr_fence() to the x2apic-specific flush tlb paths
      (sketched after this entry).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: "steiner@sgi.com" <steiner@sgi.com>
      Cc: Nick Piggin <npiggin@suse.de>
      LKML-Reference: <1237313814.27006.203.camel@localhost.localdomain>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
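      The fence itself is a one-liner; a sketch of the helper and where it
      sits, modeled on the apic code of that era:
      
      	/*
      	 * x2apic MSR writes are not serializing: force all previous
      	 * memory stores to be globally visible before the ICR MSR
      	 * write that triggers the IPI.
      	 */
      	static inline void x2apic_wrmsr_fence(void)
      	{
      		asm volatile("mfence" : : : "memory");
      	}
      
      	/* In the x2apic IPI send paths, in essence: */
      	x2apic_wrmsr_fence();
      	native_x2apic_icr_write(cfg, apicid);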
  12. 18 Feb 2009, 2 commits
  13. 29 Jan 2009, 2 commits
  14. 21 Jan 2009, 5 commits
    • x86, mm: move tlb.c to arch/x86/mm/ · 55f4949f
      Ingo Molnar authored
      Impact: cleanup
      
      Now that it's unified, move the (SMP) TLB flushing code from arch/x86/kernel/
      to arch/x86/mm/, where it belongs logically.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: rename tlb_64.c to tlb.c · 16c2d3f8
      Tejun Heo authored
      Impact: file rename
      
      tlb_64.c is now the tlb code for both 32-bit and 64-bit.  Rename it to tlb.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • x86: make x86_32 use tlb_64.c · 02cf94c3
      Tejun Heo authored
      Impact: less contention when issuing invalidate IPI, cleanup
      
      Make x86_32 use the same tlb code as 64bit.  The 64bit code uses
      multiple IPI vectors for tlb shootdown to reduce contention.  This
      patch makes x86_32 allocate the same 8 IPIs as x86_64 and share the
      code paths (vector layout sketched after this entry).
      
      Note that the usage of asmlinkage is inconsistent for x86_32 and 64
      and calls for further cleanup.  This has been noted with a FIXME
      comment in tlb_64.c.
      Signed-off-by: Tejun Heo <tj@kernel.org>
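      The shared layout, sketched (vector values quoted from the
      irq_vectors.h of that era; treat them as illustrative):
      
      	/* f0-f7 used for spreading out TLB flushes: */
      	#define INVALIDATE_TLB_VECTOR_START	0xf0
      	#define NUM_INVALIDATE_TLB_VECTORS	8
      
      	/* Senders hash onto one of 8 slots, so up to 8 flushes can be
      	 * in flight without contending on a single lock: */
      	sender = smp_processor_id() % NUM_INVALIDATE_TLB_VECTORS;
      	f = &flush_state[sender];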
    • x86: prepare for tlb merge · 6dd01bed
      Tejun Heo authored
      Impact: clean up, ipi vector number reordering for x86_32
      
      Make the following changes to prepare for tlb merge.
      
      * reorder x86_32 IPI vectors
      
      * adjust tlb_32.c and tlb_64.c such that their logic coincides exactly
      	- on a spurious invalidate ipi, tlb_32 acks the irq
      	- tlb_64 now has proper memory barriers around clearing
                flush_cpumask (no change in generated code; sketched
                after this entry)
      
      * unexport flush_tlb_page from tlb_32.c, there's no user
      
      * use unsigned int for cpu id
      
      * drop unnecessary includes from tlb_64.c
      Signed-off-by: Tejun Heo <tj@kernel.org>
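      The barrier point, sketched with the bit-op barrier helpers as they
      were named at the time (receiver side of the invalidate IPI, after
      the flush is done; an assumed simplification, not the verbatim diff):
      
      	ack_APIC_irq();
      	/* Order the flush and the ack before the sender can observe
      	 * this cpu cleared from flush_cpumask and reuse the slot. */
      	smp_mb__before_clear_bit();
      	cpumask_clear_cpu(cpu, to_cpumask(f->flush_cpumask));
      	smp_mb__after_clear_bit();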
    • x86: uv cleanup · bdbcdd48
      Tejun Heo authored
      Impact: cleanup
      
      Make the following uv related cleanups.
      
      * collect visible uv related definitions and interfaces into uv/uv.h
        and use it.  This cleans up the messy situation where uv is defined
        properly on 64bit, is a dummy on 32bit generic, and is undefined on
        the rest.  After this cleanup, uv is defined on 64 and a dummy on
        32.
      
      * update uv_flush_tlb_others() such that it takes the cpumask of
        to-be-flushed cpus as argument, instead of that minus self, and
        returns the yet-to-be-flushed cpumask, instead of modifying the
        passed-in parameter.  This interface change (sketched after this
        entry) eases a dummy implementation of uv_flush_tlb_others() and
        keeps uv tlb flush related code defined in tlb_uv proper.
      Signed-off-by: Tejun Heo <tj@kernel.org>
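      The new interface, sketched from the description above (the caller
      shown is an assumed simplification of native_flush_tlb_others()):
      
      	/* Returns the cpumask still to be flushed, or NULL if done. */
      	const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
      						  struct mm_struct *mm,
      						  unsigned long va,
      						  unsigned int cpu);
      
      	if (is_uv_system()) {
      		unsigned int cpu = get_cpu();
      
      		cpumask = uv_flush_tlb_others(cpumask, mm, va, cpu);
      		if (cpumask)
      			flush_tlb_others_ipi(cpumask, mm, va);
      		put_cpu();
      		return;
      	}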
  15. 18 Jan 2009, 1 commit
  16. 15 Jan 2009, 1 commit
  17. 14 Jan 2009, 2 commits
  18. 12 Jan 2009, 2 commits
    • SGI UV cpumask: use static temp cpumask in flush_tlb · 0e21990a
      Mike Travis authored
      Impact: Improve tlb flush performance for UV
      
      Calling alloc_cpumask_var a zillion times a second does affect
      performance.  Replace it with a static cpumask (sketched after this
      entry).
      
      Note: for non-UV configs (CONFIG_X86_UV not defined) this extra
      PER_CPU memory will be optimized out, as is_uv_system() then
      returns a constant 0.
      Signed-off-by: Mike Travis <travis@sgi.com>
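      A hedged sketch of the replacement in the UV branch of the flush
      path, assuming the pre-cleanup uv_flush_tlb_others() that returned
      nonzero when CPUs were left to flush:
      
      	/* Allocated once at build time, not per flush call. */
      	static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
      
      	if (is_uv_system()) {
      		struct cpumask *after_uv_flush = &get_cpu_var(flush_tlb_mask);
      
      		cpumask_andnot(after_uv_flush, cpumask,
      			       cpumask_of(smp_processor_id()));
      		if (uv_flush_tlb_others(after_uv_flush, mm, va))
      			flush_tlb_others_ipi(after_uv_flush, mm, va);
      		put_cpu_var(flush_tlb_mask);
      		return;
      	}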
    • x86: change flush_tlb_others to take a const struct cpumask · 4595f962
      Rusty Russell authored
      Impact: reduce stack usage, use new cpumask API.
      
      This is made a little more tricky by uv_flush_tlb_others which
      actually alters its argument, for an IPI to be sent to the remaining
      cpus in the mask.
      
      I solve this by allocating a cpumask_var_t for this case and falling back
      to IPI should this fail.
      
      To eliminate temporaries in the caller, all flush_tlb_others
      implementations now do the this-cpu-elimination step themselves
      (prototype change sketched after this entry).
      
      Note also the curious "cpus_or(f->flush_cpumask, cpumask, f->flush_cpumask)"
      which has been there since pre-git and yet f->flush_cpumask is always zero
      at this point.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Mike Travis <travis@sgi.com>
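      The prototype change, sketched (old vs. new; the name shown is the
      native x86 implementation, the other implementations change alike):
      
      	/* Before: the mask is passed by value, copied onto the stack. */
      	void native_flush_tlb_others(const cpumask_t cpumask,
      				     struct mm_struct *mm, unsigned long va);
      
      	/* After: a const pointer; each implementation removes the
      	 * current cpu from the set itself. */
      	void native_flush_tlb_others(const struct cpumask *cpumask,
      				     struct mm_struct *mm, unsigned long va);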
  19. 17 Dec 2008, 2 commits
  20. 06 Nov 2008, 1 commit
  21. 08 Jul 2008, 2 commits
    • SGI UV: TLB shootdown using broadcast assist unit, cleanups · b194b120
      Cliff Wickman authored
      TLB shootdown for SGI UV.
      
      v1: 6/2 original
      v2: 6/3 corrections/improvements per Ingo's review
      v3: 6/4 split atomic operations off to a separate patch (Jeremy's review)
      v4: 6/12 include <mach_apic.h> rather than <asm/mach-bigsmp/mach_apic.h>
               (fixes a !SMP build problem that Ingo found)
               fix the index on uv_table_bases[blade]
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, SGI UV: TLB shootdown using broadcast assist unit · 1812924b
      Cliff Wickman authored
      TLB shootdown for SGI UV.
      
      Depends on patch (in tip/x86/irq):
         x86-update-macros-used-by-uv-platform.patch   Jack Steiner May 29
      
      This patch provides the ability to flush TLBs on cpus that are not
      on the local node.  The hardware mechanism for distributing the
      flush messages is the UV's "broadcast assist unit".
      
      The hook to intercept TLB shootdown requests is a 2-line change to
      native_flush_tlb_others() (arch/x86/kernel/tlb_64.c), sketched after
      this entry.
      
      This code has been tested on a hardware simulator. The real hardware
      is not yet available.
      
      The shootdown statistics are provided through /proc/sgi_uv/ptc_statistics.
      The use of /sys was considered, but would have required the use of
      many /sys files.  The debugfs was also considered, but these statistics
      should be available on an ongoing basis, not just for debugging.
      
      Issues to be fixed later:
      - The IRQ for the messaging interrupt is currently hardcoded as 200
        (see UV_BAU_MESSAGE).  It should be dynamically assigned in the future.
      - The use of appropriate udelay()'s is untested, as they are a problem
        in the simulator.
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
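      The hook mentioned above, sketched (in native_flush_tlb_others();
      exact form assumed from the description):
      
      	/* Hand the shootdown to the UV broadcast assist unit; fall
      	 * through to the normal IPI path only if it could not cover
      	 * every target cpu. */
      	if (is_uv_system() && uv_flush_tlb_others(&cpumask, mm, va))
      		return;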
  22. 26 Jun 2008, 1 commit
  23. 26 Apr 2008, 1 commit
  24. 25 Apr 2008, 1 commit
  25. 17 Apr 2008, 6 commits