1. 28 June 2012, 2 commits
    •
      x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range · e7b52ffd
      Authored by Alex Shi
      x86 has no flush_tlb_range support at the instruction level. Currently
      flush_tlb_range is implemented by flushing the whole TLB, which is not
      the best solution for all scenarios. If we instead use 'invlpg' to
      flush only a few entries from the TLB, later accesses gain from the
      TLB entries that remain valid.
      
      But the 'invlpg' instruction itself is expensive. Its execution time
      is comparable to a cr3 rewrite, and even a bit higher on SNB CPUs.
      
      So, on a CPU with 512 4KB TLB entries, the balance point is at:
      	(512 - X) * 100ns (assumed TLB refill cost) =
      		X (TLB flush entries) * 100ns (assumed invlpg cost)
      
      Solving gives X = 256, i.e. half of the 512 entries.
      
      But with the CPU's prefetcher and page-miss handler unit, the actual
      TLB refill cost is far lower than 100ns for sequential accesses. And
      two HT siblings in one core make memory accesses faster when they
      touch the same memory. So, in the patch, I only flush page by page
      when the number of target entries is less than 1/16 of the active TLB
      entries (a sketch of this heuristic follows this entry). I have no
      data to support the '1/16' ratio, so any suggestions are welcome.
      
      As for hugetlb, presumably due to the smaller page tables and smaller
      number of active TLB entries, I saw no benefit in my benchmark, so it
      is not optimized for now.
      
      My micro-benchmark shows that in the ideal scenario read performance
      improves by 70 percent, and in the worst scenario read/write
      performance is similar to the unpatched 3.4-rc4 kernel.
      
      Here is the read data on my 2-socket * 4-core * HT NHM-EP machine,
      with THP set to 'always':
      
      Multi-threaded testing; the '-t' parameter is the thread count:
      	       	        with patch   unpatched 3.4-rc4
      ./mprotect -t 1           14ns		24ns
      ./mprotect -t 2           13ns		22ns
      ./mprotect -t 4           12ns		19ns
      ./mprotect -t 8           14ns		16ns
      ./mprotect -t 16          28ns		26ns
      ./mprotect -t 32          54ns		51ns
      ./mprotect -t 128         200ns		199ns
      
      Single process with sequential flushing and memory access:
      
      		       	with patch   unpatched 3.4-rc4
      ./mprotect		    7ns			11ns
      ./mprotect -p 4096  -l 8 -n 10240
      			    21ns		21ns
      
      [ hpa: http://lkml.kernel.org/r/1B4B44D9196EFF41AE41FDA404FC0A100BFF94@SHSMSX101.ccr.corp.intel.com
        has additional performance numbers. ]
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-3-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e7b52ffd
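
      A minimal C sketch of the heuristic described above; the function and
      helper names here are illustrative assumptions, not the exact symbols
      added by the patch:

        #define FLUSH_PER_PAGE_RATIO 16  /* per-page flush only below 1/16 of the TLB */

        static void flush_tlb_range_sketch(unsigned long start, unsigned long end,
                                           unsigned int tlb_entries)
        {
                unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
                unsigned long addr;

                /* large ranges: cheaper to flush everything with a cr3 rewrite */
                if (nr_pages > tlb_entries / FLUSH_PER_PAGE_RATIO) {
                        local_flush_tlb();
                        return;
                }

                /* small ranges: one 'invlpg' per page keeps the rest of the TLB hot */
                for (addr = start; addr < end; addr += PAGE_SIZE)
                        __flush_tlb_single(addr);
        }
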
    •
      x86/tlb_info: get last level TLB entry number of CPU · e0ba94f1
      Authored by Alex Shi
      For 4KB pages, x86 CPUs have one or two TLB levels: the first level
      consists of separate data and instruction TLBs, and the second level
      is a shared TLB for both data and instructions.
      
      For huge pages there is usually just one TLB level, with separate
      entries for 2MB/4MB and 1GB pages.
      
      Although each level's TLB size matters for fine-grained performance
      tuning, for general, coarse optimization the last-level TLB entry
      count is sufficient. In fact, the last-level TLB always has the
      largest number of entries.
      
      This patch records that largest TLB entry count for use in future TLB
      optimizations (a sketch follows this entry).
      
      Following Borislav's suggestion, everything except the tlb_ll[i/d]_*
      arrays (the other functions and data) is released after system boot.
      
      To be friendly to every x86 vendor, the vendor-specific code was moved
      into each vendor's own file.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-2-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e0ba94f1
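
      A rough sketch of the idea with assumed names (not the exact
      tlb_ll[i/d]_* symbols from the patch): vendor-specific detection code
      reports each TLB level it finds, and only the largest entry counts,
      i.e. the last level, are kept for later use.

        static unsigned int tlb_lli_4k_entries; /* last-level instruction TLB, 4KB pages */
        static unsigned int tlb_lld_4k_entries; /* last-level data TLB, 4KB pages */

        /* called from vendor-specific CPU setup for every detected TLB level */
        static void note_tlb_level(unsigned int itlb_entries, unsigned int dtlb_entries)
        {
                /* the last level always has the biggest entry count, so keep the max */
                if (itlb_entries > tlb_lli_4k_entries)
                        tlb_lli_4k_entries = itlb_entries;
                if (dtlb_entries > tlb_lld_4k_entries)
                        tlb_lld_4k_entries = dtlb_entries;
        }
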
  2. 14 June 2012, 2 commits
    •
      x86: Add read_mostly declaration/definition to variables from smp.h · 0816b0f0
      Authored by Vlad Zolotarov
      Add "read-mostly" qualifier to the following variables in
      smp.h:
      
       - cpu_sibling_map
       - cpu_core_map
       - cpu_llc_shared_map
       - cpu_llc_id
       - cpu_number
       - x86_cpu_to_apicid
       - x86_bios_cpu_apicid
       - x86_cpu_to_logical_apicid
      
      Since all the variables above are only written during initialization,
      this change is meant to prevent false sharing. More specifically, on
      the vSMP Foundation platform x86_cpu_to_apicid shared the same
      internode cache line with the frequently written lapic_events.
      
      An analysis of the first 33 of the 219 per_cpu variables (of the
      memory they describe, to be more specific) shows that 8 are read-mostly
      in nature (tlb_vector_offset, cpu_loops_per_jiffy, xen_debug_irq, etc.)
      and 25 are frequently written (irq_stack_union, gdt_page,
      exception_stacks, idt_desc, etc.).
      
      Assuming that the rest of the per_cpu variables are distributed
      similarly, identifying the read-mostly memory makes more sense for
      long-term code maintenance than identifying the frequently written
      memory (see the declaration sketch after this entry).
      Signed-off-by: Vlad Zolotarov <vlad@scalemp.com>
      Acked-by: Shai Fultheim <shai@scalemp.com>
      Cc: Shai Fultheim (Shai@ScaleMP.com) <Shai@scalemp.com>
      Cc: ido@wizery.com
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1719258.EYKzE4Zbq5@vlad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      0816b0f0
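
      For illustration, the change amounts to moving declarations like the
      following in smp.h to the read-mostly per-cpu variants (a hedged
      sketch; only two of the listed variables are shown):

        /* before: ordinary per-cpu declarations */
        DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_map);
        DECLARE_PER_CPU(int, cpu_number);

        /* after: placed in the read-mostly per-cpu section, so these rarely
         * written variables no longer share cache lines with frequently
         * written per-cpu data such as lapic_events */
        DECLARE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_map);
        DECLARE_PER_CPU_READ_MOSTLY(int, cpu_number);
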
    •
      x86: Define early read-mostly per-cpu macros · c35f7741
      Authored by Ido Yariv
      Some read-mostly per-cpu data may need to be declared or defined
      early, so it can be initialized and accessed before per_cpu
      areas are allocated.
      
      Only the copy that resides in the per_cpu areas needs to be
      read-mostly; there is little benefit in optimizing the cache-line
      placement of the early, init-time copy (a sketch follows this entry).
      Signed-off-by: Ido Yariv <ido@wizery.com>
      [ Added the missing declarations in !SMP code. ]
      Signed-off-by: Vlad Zolotarov <vlad@scalemp.com>
      Acked-by: Shai Fultheim <shai@scalemp.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/46188571.ddB8aVQYWo@vlad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c35f7741
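
      A hedged sketch of the pattern: before the per_cpu areas exist, reads
      go through a static early map, and the per-cpu copy itself is declared
      read-mostly. The macro body below is an assumption of the general
      shape, not the exact definition used in the kernel:

        #define DECLARE_EARLY_PER_CPU_READ_MOSTLY(_type, _name)         \
                DECLARE_PER_CPU_READ_MOSTLY(_type, _name);              \
                extern __typeof__(_type) _name##_early_map[];           \
                extern __typeof__(_type) *_name##_early_ptr

        /* usage, e.g. for a variable accessed before per_cpu areas are set up */
        DECLARE_EARLY_PER_CPU_READ_MOSTLY(u16, x86_cpu_to_apicid);
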
  3. 08 June 2012, 4 commits
  4. 06 June 2012, 17 commits
  5. 02 June 2012, 12 commits
  6. 01 June 2012, 3 commits
    •
      ftrace/x86: Do not change stacks in DEBUG when calling lockdep · 5963e317
      Authored by Steven Rostedt
      When both DYNAMIC_FTRACE and LOCKDEP are set, the TRACE_IRQS_ON/OFF
      will call into the lockdep code. The lockdep code can call lots of
      functions that may be traced by ftrace. When ftrace is updating its
      code and hits a breakpoint, the breakpoint handler will call into
      lockdep. If lockdep happens to call a function that also has a
      breakpoint attached, it will jump back into the breakpoint handler,
      reset the stack to the debug stack, and corrupt the contents
      currently on that stack.
      
      The 'do_sym' call that calls do_int3() is protected by modifying the
      IST table to point to a different location if another breakpoint is
      hit. But the TRACE_IRQS_OFF/ON are outside that protection, and if
      a breakpoint is hit from those, the stack will get corrupted, and
      the kernel will crash:
      
      [ 1013.243754] BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
      [ 1013.272665] IP: [<ffff880145cc0000>] 0xffff880145cbffff
      [ 1013.285186] PGD 1401b2067 PUD 14324c067 PMD 0
      [ 1013.298832] Oops: 0010 [#1] PREEMPT SMP
      [ 1013.310600] CPU 2
      [ 1013.317904] Modules linked in: ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables crc32c_intel ghash_clmulni_intel microcode usb_debug serio_raw pcspkr iTCO_wdt i2c_i801 iTCO_vendor_support e1000e nfsd nfs_acl auth_rpcgss lockd sunrpc i915 video i2c_algo_bit drm_kms_helper drm i2c_core [last unloaded: scsi_wait_scan]
      [ 1013.401848]
      [ 1013.407399] Pid: 112, comm: kworker/2:1 Not tainted 3.4.0+ #30
      [ 1013.437943] RIP: 8eb8:[<ffff88014630a000>]  [<ffff88014630a000>] 0xffff880146309fff
      [ 1013.459871] RSP: ffffffff8165e919:ffff88014780f408  EFLAGS: 00010046
      [ 1013.477909] RAX: 0000000000000001 RBX: ffffffff81104020 RCX: 0000000000000000
      [ 1013.499458] RDX: ffff880148008ea8 RSI: ffffffff8131ef40 RDI: ffffffff82203b20
      [ 1013.521612] RBP: ffffffff81005751 R08: 0000000000000000 R09: 0000000000000000
      [ 1013.543121] R10: ffffffff82cdc318 R11: 0000000000000000 R12: ffff880145cc0000
      [ 1013.564614] R13: ffff880148008eb8 R14: 0000000000000002 R15: ffff88014780cb40
      [ 1013.586108] FS:  0000000000000000(0000) GS:ffff880148000000(0000) knlGS:0000000000000000
      [ 1013.609458] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      [ 1013.627420] CR2: 0000000000000002 CR3: 0000000141f10000 CR4: 00000000001407e0
      [ 1013.649051] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [ 1013.670724] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      [ 1013.692376] Process kworker/2:1 (pid: 112, threadinfo ffff88013fe0e000, task ffff88014020a6a0)
      [ 1013.717028] Stack:
      [ 1013.724131]  ffff88014780f570 ffff880145cc0000 0000400000004000 0000000000000000
      [ 1013.745918]  cccccccccccccccc ffff88014780cca8 ffffffff811072bb ffffffff81651627
      [ 1013.767870]  ffffffff8118f8a7 ffffffff811072bb ffffffff81f2b6c5 ffffffff81f11bdb
      [ 1013.790021] Call Trace:
      [ 1013.800701] Code: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a <e7> d7 64 81 ff ff ff ff 01 00 00 00 00 00 00 00 65 d9 64 81 ff
      [ 1013.861443] RIP  [<ffff88014630a000>] 0xffff880146309fff
      [ 1013.884466]  RSP <ffff88014780f408>
      [ 1013.901507] CR2: 0000000000000002
      
      The solution is to reuse the NMI functions that change the IDT table
      so that the debug handler keeps its current stack (in kernel mode)
      when a breakpoint is hit:
      
        call debug_stack_set_zero
        TRACE_IRQS_ON
        call debug_stack_reset
      
      If TRACE_IRQS_ON then happens to hit a breakpoint, it will keep the
      current stack and not crash the box.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5963e317
    •
      x86: Allow nesting of the debug stack IDT setting · f8988175
      Authored by Steven Rostedt
      When the NMI handler runs, it checks if it preempted a debug handler
      and if that handler is using the debug stack. If it is, it changes the
      IDT table not to update the stack, otherwise it will reset the debug
      stack and corrupt the debug handler it preempted.
      
      Now that ftrace uses breakpoints to change functions from nops to
      callers, many more places may hit a breakpoint. Unfortunately this
      includes some of the calls that lockdep performs. Which causes issues
      with the debug stack. It too needs to change the debug stack before
      tracing (if called from the debug handler).
      
      Allow debug_stack_set_zero() and debug_stack_reset() to nest so that
      the debug handlers can take advantage of them too (a sketch follows
      this entry).
      
      [ Used this_cpu_*() over __get_cpu_var() as suggested by H. Peter Anvin ]
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f8988175
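
      A minimal sketch of how such nesting can work, with assumed names: a
      per-cpu counter tracks how many callers currently want the zeroed
      debug-stack IST entry, and the normal IDT is restored only when the
      count drops back to zero.

        static DEFINE_PER_CPU(u32, debug_stack_use_ctr);

        void debug_stack_set_zero(void)
        {
                this_cpu_inc(debug_stack_use_ctr);
                load_zeroed_debug_idt();        /* hypothetical helper */
        }

        void debug_stack_reset(void)
        {
                /* only the outermost caller restores the normal IDT */
                if (this_cpu_dec_return(debug_stack_use_ctr) == 0)
                        load_normal_debug_idt();        /* hypothetical helper */
        }
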
    •
      x86: Reset the debug_stack update counter · c0525a69
      Authored by Steven Rostedt
      When an NMI goes off and it sees that it preempted the debug stack,
      to keep the debug stack safe, it changes the IDT to point to one that
      does not modify the stack on breakpoint (to allow breakpoints in NMIs).
      
      But the variable that is set so the change can be undone on exit never
      gets cleared. Thus, once it has been set the first time, every later
      NMI performs the reset on exit even when it does not need to (a sketch
      follows this entry).
      
      [ Added H. Peter Anvin's suggestion to use this_cpu_read/write ]
      
      Cc: <stable@vger.kernel.org> # v3.3
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c0525a69
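
      A sketch of the fix described above, again with assumed names: on NMI
      exit the per-cpu flag is cleared with this_cpu_write(), so later NMIs
      no longer perform an unnecessary reset.

        static DEFINE_PER_CPU(int, update_debug_stack);

        static void nmi_restore_debug_stack(void)
        {
                if (this_cpu_read(update_debug_stack)) {
                        debug_stack_reset();
                        /* the missing step: clear the flag once the reset is done */
                        this_cpu_write(update_debug_stack, 0);
                }
        }
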