1. 21 Jul, 2015: 1 commit
  2. 04 Feb, 2015: 1 commit
  3. 10 Aug, 2014: 1 commit
  4. 08 Aug, 2014: 1 commit
  5. 31 Jul, 2014: 7 commits
    • x86/mm: Set TLB flush tunable to sane value (33) · a5102476
      Committed by Dave Hansen
      This has been run through Intel's LKP tests across a wide range
      of modern systems and workloads, and it was not shown to make a
      measurable performance difference, positive or negative.
      
      Now that we have some shiny new tracepoints, we can actually
      figure out what the heck is going on.
      
      During a kernel compile, 60% of the flush_tlb_mm_range() calls
      are for a single page.  It breaks down like this:
      
       size   percent  percent<=
        V        V        V
      GLOBAL:   2.20%   2.20% avg cycles:  2283
           1:  56.92%  59.12% avg cycles:  1276
           2:  13.78%  72.90% avg cycles:  1505
           3:   8.26%  81.16% avg cycles:  1880
           4:   7.41%  88.58% avg cycles:  2447
           5:   1.73%  90.31% avg cycles:  2358
           6:   1.32%  91.63% avg cycles:  2563
           7:   1.14%  92.77% avg cycles:  2862
           8:   0.62%  93.39% avg cycles:  3542
           9:   0.08%  93.47% avg cycles:  3289
          10:   0.43%  93.90% avg cycles:  3570
          11:   0.20%  94.10% avg cycles:  3767
          12:   0.08%  94.18% avg cycles:  3996
          13:   0.03%  94.20% avg cycles:  4077
          14:   0.02%  94.23% avg cycles:  4836
          15:   0.04%  94.26% avg cycles:  5699
          16:   0.06%  94.32% avg cycles:  5041
          17:   0.57%  94.89% avg cycles:  5473
          18:   0.02%  94.91% avg cycles:  5396
          19:   0.03%  94.95% avg cycles:  5296
          20:   0.02%  94.96% avg cycles:  6749
          21:   0.18%  95.14% avg cycles:  6225
          22:   0.01%  95.15% avg cycles:  6393
          23:   0.01%  95.16% avg cycles:  6861
          24:   0.12%  95.28% avg cycles:  6912
          25:   0.05%  95.32% avg cycles:  7190
          26:   0.01%  95.33% avg cycles:  7793
          27:   0.01%  95.34% avg cycles:  7833
          28:   0.01%  95.35% avg cycles:  8253
          29:   0.08%  95.42% avg cycles:  8024
          30:   0.03%  95.45% avg cycles:  9670
          31:   0.01%  95.46% avg cycles:  8949
          32:   0.01%  95.46% avg cycles:  9350
          33:   3.11%  98.57% avg cycles:  8534
          34:   0.02%  98.60% avg cycles: 10977
          35:   0.02%  98.62% avg cycles: 11400
      
      We get into diminishing returns pretty quickly.  On pre-IvyBridge
      CPUs, we used to set the limit at 8 pages; it was set to 128
      on IvyBridge.  That 128 number looks pretty silly considering that
      less than 0.5% of the flushes are that large.
      
      The previous code tried to size this number based on the size of
      the TLB.  Good idea, but it's error-prone, needs maintenance
      (which it didn't get up to now), and probably would not matter
      much in practice.
      
      Setting it to 33 means that we cover the mallopt
      M_TRIM_THRESHOLD, which is the most universally common size to do
      flushes.
      
      That's the short version.  Here's the long one for why I chose 33:
      
      1. These numbers have a constant bias in the timestamps from the
         tracing.  It probably accounts for a couple hundred cycles in
         each of these tests, but it should be fairly _even_ across all
         of them.
         The smallest delta between the tracepoints I have ever seen is
         335 cycles.  This is one reason the cycles/page cost goes down in
         general as the flushes get larger.  The true cost is nearer to
         100 cycles.
      2. A full flush is more expensive than a single invlpg, but not
         by much (single percentages).
      3. A dtlb miss is 17.1ns (~45 cycles) and an itlb miss is 13.0ns
         (~34 cycles).  At those rates, refilling the 512-entry dTLB takes
         22,000 cycles.
      4. 22,000 cycles is approximately the equivalent of doing 85
         invlpg operations.  But, the odds are that the TLB can
         actually be filled up faster than that because TLB misses that
         are close in time also tend to leverage the same caches.
      5. ~98% of flushes are <=33 pages.  There are a lot of flushes of
         33 pages, probably because libc's M_TRIM_THRESHOLD is set to
         128k (32 pages).
      6. I've found no consistent data to support changing the IvyBridge
         vs. SandyBridge tunable by a factor of 16.
      
      I used the performance counters on this hardware (IvyBridge i5-3320M)
      to figure out the tlb miss costs:
      
      ocperf.py stat -e dtlb_load_misses.walk_duration,dtlb_load_misses.walk_completed,dtlb_store_misses.walk_duration,dtlb_store_misses.walk_completed,itlb_misses.walk_duration,itlb_misses.walk_completed,itlb.itlb_flush
      
           7,720,030,970      dtlb_load_misses_walk_duration                                    [57.13%]
             169,856,353      dtlb_load_misses_walk_completed                                    [57.15%]
             708,832,859      dtlb_store_misses_walk_duration                                    [57.17%]
              19,346,823      dtlb_store_misses_walk_completed                                    [57.17%]
           2,779,687,402      itlb_misses_walk_duration                                    [57.15%]
              82,241,148      itlb_misses_walk_completed                                    [57.13%]
                 770,717      itlb_itlb_flush                                              [57.11%]
      
      These numbers show that a dtlb miss is 17.1ns (~45 cycles) and an
      itlb miss is 13.0ns (~34 cycles).  At those rates, refilling the
      512-entry dTLB takes 22,000 cycles.  On a SandyBridge system with
      more cores and larger caches, the costs are dtlb=13.4ns and
      itlb=9.5ns.
      
      cat perf.stat.txt | perl -pe 's/,//g'
      	| awk '/itlb_misses_walk_duration/ { icyc+=$1 }
      		/itlb_misses_walk_completed/ { imiss+=$1 }
      		/dtlb_.*_walk_duration/ { dcyc+=$1 }
      		/dtlb_.*completed/ { dmiss+=$1 }
      		END {print "itlb cyc/miss: ", icyc/imiss, " dtlb cyc/miss: ", dcyc/dmiss, "   -----    ", icyc,imiss, dcyc,dmiss }'
      
      On Westmere CPUs, the counters to use are: itlb_flush,itlb_misses.walk_cycles,itlb_misses.any,dtlb_misses.walk_cycles,dtlb_misses.any
      
      The assumptions that this code went in under:
      https://lkml.org/lkml/2012/6/12/119 say that a flush and a refill are
      about 100ns.  Being generous, that is over by a factor of 6 on the
      refill side, although it is fairly close on the cost of an invlpg.
      An increase of a single invlpg operation seems to lengthen the flush
      range operation by about 200 cycles.  Here is one example of the data
      collected for flushing 10 and 11 pages (full data are below):
      
          10:   0.43%  93.90% avg cycles:  3570 cycles/page:  357 samples: 4714
          11:   0.20%  94.10% avg cycles:  3767 cycles/page:  342 samples: 2145
      
      How to generate this table:
      
      	echo 10000 > /sys/kernel/debug/tracing/buffer_size_kb
      	echo x86-tsc > /sys/kernel/debug/tracing/trace_clock
      	echo 'reason != 0' > /sys/kernel/debug/tracing/events/tlb/tlb_flush/filter
      	echo 1 > /sys/kernel/debug/tracing/events/tlb/tlb_flush/enable
      
      Pipe the trace output in to this script:
      
      	http://sr71.net/~dave/intel/201402-tlb/trace-time-diff-process.pl.txt
      
      Note that these data were gathered with the invlpg threshold set to
      150 pages.  Only data points with >=50 samples were printed:
      
      Flush     % of     % <=
      size      flushes  this
      (pages)            size
      ------------------------------------------------------------------------------
          -1:   2.20%   2.20% avg cycles:  2283 cycles/page: xxxx samples: 23960
           1:  56.92%  59.12% avg cycles:  1276 cycles/page: 1276 samples: 620895
           2:  13.78%  72.90% avg cycles:  1505 cycles/page:  752 samples: 150335
           3:   8.26%  81.16% avg cycles:  1880 cycles/page:  626 samples: 90131
           4:   7.41%  88.58% avg cycles:  2447 cycles/page:  611 samples: 80877
           5:   1.73%  90.31% avg cycles:  2358 cycles/page:  471 samples: 18885
           6:   1.32%  91.63% avg cycles:  2563 cycles/page:  427 samples: 14397
           7:   1.14%  92.77% avg cycles:  2862 cycles/page:  408 samples: 12441
           8:   0.62%  93.39% avg cycles:  3542 cycles/page:  442 samples: 6721
           9:   0.08%  93.47% avg cycles:  3289 cycles/page:  365 samples: 917
          10:   0.43%  93.90% avg cycles:  3570 cycles/page:  357 samples: 4714
          11:   0.20%  94.10% avg cycles:  3767 cycles/page:  342 samples: 2145
          12:   0.08%  94.18% avg cycles:  3996 cycles/page:  333 samples: 864
          13:   0.03%  94.20% avg cycles:  4077 cycles/page:  313 samples: 289
          14:   0.02%  94.23% avg cycles:  4836 cycles/page:  345 samples: 236
          15:   0.04%  94.26% avg cycles:  5699 cycles/page:  379 samples: 390
          16:   0.06%  94.32% avg cycles:  5041 cycles/page:  315 samples: 643
          17:   0.57%  94.89% avg cycles:  5473 cycles/page:  321 samples: 6229
          18:   0.02%  94.91% avg cycles:  5396 cycles/page:  299 samples: 224
          19:   0.03%  94.95% avg cycles:  5296 cycles/page:  278 samples: 367
          20:   0.02%  94.96% avg cycles:  6749 cycles/page:  337 samples: 185
          21:   0.18%  95.14% avg cycles:  6225 cycles/page:  296 samples: 1964
          22:   0.01%  95.15% avg cycles:  6393 cycles/page:  290 samples: 83
          23:   0.01%  95.16% avg cycles:  6861 cycles/page:  298 samples: 61
          24:   0.12%  95.28% avg cycles:  6912 cycles/page:  288 samples: 1307
          25:   0.05%  95.32% avg cycles:  7190 cycles/page:  287 samples: 533
          26:   0.01%  95.33% avg cycles:  7793 cycles/page:  299 samples: 94
          27:   0.01%  95.34% avg cycles:  7833 cycles/page:  290 samples: 66
          28:   0.01%  95.35% avg cycles:  8253 cycles/page:  294 samples: 73
          29:   0.08%  95.42% avg cycles:  8024 cycles/page:  276 samples: 846
          30:   0.03%  95.45% avg cycles:  9670 cycles/page:  322 samples: 296
          31:   0.01%  95.46% avg cycles:  8949 cycles/page:  288 samples: 79
          32:   0.01%  95.46% avg cycles:  9350 cycles/page:  292 samples: 60
          33:   3.11%  98.57% avg cycles:  8534 cycles/page:  258 samples: 33936
          34:   0.02%  98.60% avg cycles: 10977 cycles/page:  322 samples: 268
          35:   0.02%  98.62% avg cycles: 11400 cycles/page:  325 samples: 177
          36:   0.01%  98.63% avg cycles: 11504 cycles/page:  319 samples: 161
          37:   0.02%  98.65% avg cycles: 11596 cycles/page:  313 samples: 182
          38:   0.02%  98.66% avg cycles: 11850 cycles/page:  311 samples: 195
          39:   0.01%  98.68% avg cycles: 12158 cycles/page:  311 samples: 128
          40:   0.01%  98.68% avg cycles: 11626 cycles/page:  290 samples: 78
          41:   0.04%  98.73% avg cycles: 11435 cycles/page:  278 samples: 477
          42:   0.01%  98.73% avg cycles: 12571 cycles/page:  299 samples: 74
          43:   0.01%  98.74% avg cycles: 12562 cycles/page:  292 samples: 78
          44:   0.01%  98.75% avg cycles: 12991 cycles/page:  295 samples: 108
          45:   0.01%  98.76% avg cycles: 13169 cycles/page:  292 samples: 78
          46:   0.02%  98.78% avg cycles: 12891 cycles/page:  280 samples: 261
          47:   0.01%  98.79% avg cycles: 13099 cycles/page:  278 samples: 67
          48:   0.01%  98.80% avg cycles: 13851 cycles/page:  288 samples: 77
          49:   0.01%  98.80% avg cycles: 13749 cycles/page:  280 samples: 66
          50:   0.01%  98.81% avg cycles: 13949 cycles/page:  278 samples: 73
          52:   0.00%  98.82% avg cycles: 14243 cycles/page:  273 samples: 52
          54:   0.01%  98.83% avg cycles: 15312 cycles/page:  283 samples: 87
          55:   0.01%  98.84% avg cycles: 15197 cycles/page:  276 samples: 109
          56:   0.02%  98.86% avg cycles: 15234 cycles/page:  272 samples: 208
          57:   0.00%  98.86% avg cycles: 14888 cycles/page:  261 samples: 53
          58:   0.01%  98.87% avg cycles: 15037 cycles/page:  259 samples: 59
          59:   0.01%  98.87% avg cycles: 15752 cycles/page:  266 samples: 63
          62:   0.00%  98.89% avg cycles: 16222 cycles/page:  261 samples: 54
          64:   0.02%  98.91% avg cycles: 17179 cycles/page:  268 samples: 248
          65:   0.12%  99.03% avg cycles: 18762 cycles/page:  288 samples: 1324
          85:   0.00%  99.10% avg cycles: 21649 cycles/page:  254 samples: 50
         127:   0.01%  99.18% avg cycles: 32397 cycles/page:  255 samples: 75
         128:   0.13%  99.31% avg cycles: 31711 cycles/page:  247 samples: 1466
         129:   0.18%  99.49% avg cycles: 33017 cycles/page:  255 samples: 1927
         181:   0.33%  99.84% avg cycles:  2489 cycles/page:   13 samples: 3547
         256:   0.05%  99.91% avg cycles:  2305 cycles/page:    9 samples: 550
         512:   0.03%  99.95% avg cycles:  2133 cycles/page:    4 samples: 304
        1512:   0.01%  99.99% avg cycles:  3038 cycles/page:    2 samples: 65
      
      Here are the tlb counters during a 10-second slice of a kernel compile
      on a SandyBridge system.  Its miss costs are better than IvyBridge's,
      probably due to the larger caches, since this was one of the 'X'
      extreme parts.
      
          10,873,007,282      dtlb_load_misses_walk_duration
             250,711,333      dtlb_load_misses_walk_completed
           1,212,395,865      dtlb_store_misses_walk_duration
              31,615,772      dtlb_store_misses_walk_completed
           5,091,010,274      itlb_misses_walk_duration
             163,193,511      itlb_misses_walk_completed
               1,321,980      itlb_itlb_flush
      
            10.008045158 seconds time elapsed
      
      # cat perf.stat.1392743721.txt | perl -pe 's/,//g' | awk '/itlb_misses_walk_duration/ { icyc+=$1 } /itlb_misses_walk_completed/ { imiss+=$1 } /dtlb_.*_walk_duration/ { dcyc+=$1 } /dtlb_.*.*completed/ { dmiss+=$1 } END {print "itlb cyc/miss: ", icyc/imiss/3.3, " dtlb cyc/miss: ", dcyc/dmiss/3.3, "   -----    ", icyc,imiss, dcyc,dmiss }'
      itlb ns/miss:  9.45338  dtlb ns/miss:  12.9716
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154103.10C1115E@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: New tunable for single vs full TLB flush · 2d040a1c
      Committed by Dave Hansen
      Most of the logic here is in the documentation file.  Please take
      a look at it.
      
      I know we've come full-circle here back to a tunable, but this
      new one is *WAY* simpler.  I challenge anyone to describe in one
      sentence how the old one worked.  Here's the way the new one
      works:
      
      	If we are flushing more pages than the ceiling, we use
      	the full flush, otherwise we use per-page flushes.
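That one sentence can be sketched in a few lines of C.  This is a simplified stand-in, not the kernel code: the tunable name follows this series, the default of 33 is the value another patch in this series settles on, and the helper function is hypothetical.

```c
#include <assert.h>
#include <stdbool.h>

/* default taken from the "sane value (33)" patch in this series */
static unsigned long tlb_single_page_flush_ceiling = 33;

/* hypothetical helper: decide full flush vs per-page invlpg */
static bool use_full_flush(unsigned long start, unsigned long end,
			   unsigned long page_size)
{
	unsigned long nr_pages = (end - start) / page_size;

	/* more pages than the ceiling: one full flush is cheaper */
	return nr_pages > tlb_single_page_flush_ceiling;
}
```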
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154101.12B52CAF@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: Add tracepoints for TLB flushes · d17d8f9d
      Committed by Dave Hansen
      We don't have any good way to figure out what kinds of flushes
      are being attempted.  Right now, we can try to use the vm
      counters, but those only tell us what we actually did with the
      hardware (one-by-one vs full) and don't tell us what was actually
      _requested_.
      
      This allows us to select out "interesting" TLB flushes that we
      might want to optimize (like the ranged ones) and ignore the ones
      that we have very little control over (the ones at context
      switch).
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154059.4C96CBA5@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: Unify remote INVLPG code · a23421f1
      Committed by Dave Hansen
      There are currently three paths through the remote flush code:
      
      1. full invalidation
      2. single page invalidation using invlpg
      3. ranged invalidation using invlpg
      
      This takes 2 and 3 and combines them into a single path by
      making the single-page case simply use a range of start to start
      plus a single page.  This makes placement of our tracepoint easier.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154058.E0F90408@viggo.jf.intel.com
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: Fix missed global TLB flush stat · 9dfa6dee
      Committed by Dave Hansen
      If we take the
      
      	if (end == TLB_FLUSH_ALL || vmflag & VM_HUGETLB) {
      		local_flush_tlb();
      		goto out;
      	}
      
      path out of flush_tlb_mm_range(), we will have flushed the tlb,
      but not incremented NR_TLB_LOCAL_FLUSH_ALL.  This unifies the
      way out of the function so that we always take a single path when
      doing a full tlb flush.
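A minimal sketch of that single exit path (mock stubs rather than kernel code; the real function bumps the counter through the vm-events machinery):

```c
#include <assert.h>
#include <stdbool.h>

static unsigned long nr_tlb_local_flush_all;	/* NR_TLB_LOCAL_FLUSH_ALL */

static void local_flush_tlb(void) { /* would rewrite cr3 */ }

static void flush_tlb_mm_range_sketch(bool full_flush)
{
	if (full_flush) {
		local_flush_tlb();
		goto out;	/* single exit for every full flush */
	}

	/* (the per-page invlpg loop would run here) */
	return;
out:
	nr_tlb_local_flush_all++;	/* now never skipped */
}
```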
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154056.FF763B76@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: Rip out complicated, out-of-date, buggy TLB flushing · e9f4e0a9
      Committed by Dave Hansen
      I think the flush_tlb_mm_range() code that tries to tune the
      flush sizes based on the CPU needs to get ripped out for
      several reasons:
      
      1. It is obviously buggy.  It uses mm->total_vm to judge the
         task's footprint in the TLB.  It should certainly be using
         some measure of RSS, *NOT* ->total_vm since only resident
         memory can populate the TLB.
      2. Haswell and several other CPUs are missing from the
         intel_tlb_flushall_shift_set() function.  Thus, it has been
         demonstrated to bitrot quickly in practice.
      3. It is plain wrong in my VM:
      	[    0.037444] Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
      	[    0.037444] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0
      	[    0.037444] tlb_flushall_shift: 6
         which leads it to never use invlpg.
      4. The assumptions about TLB refill costs are wrong:
      	http://lkml.kernel.org/r/1337782555-8088-3-git-send-email-alex.shi@intel.com
          (more on this in later patches)
      5. I can not reproduce the original data: https://lkml.org/lkml/2012/5/17/59
         I believe the sample times were too short.  Running the
         benchmark in a loop yields times that vary quite a bit.
      
      Note that this leaves us with a static ceiling of 1 page.  This
      is a conservative, dumb setting, and will be revised in a later
      patch.
      
      This also removes the code which attempts to predict whether we
      are flushing data or instructions.  We expect instruction flushes
      to be relatively rare and not worth tuning for explicitly.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154055.ABC88E89@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86/mm: Clean up the TLB flushing code · 4995ab9c
      Committed by Dave Hansen
      The
      
      	if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids)
      
      line of code is not exactly the easiest to audit, especially when
      it ends up at two different indentation levels.  This eliminates
      one of the copy-n-paste versions.  It also gives us a unified
      exit point for each path through this function.  We need this in
      a minute for our tracepoint.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20140731154054.44F1CDDC@viggo.jf.intel.com
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  6. 25 Jan, 2014: 3 commits
  7. 12 Sep, 2013: 2 commits
    • mm: vmstats: track TLB flush stats on UP too · 6df46865
      Committed by Dave Hansen
      The previous patch doing vmstats for TLB flushes ("mm: vmstats: tlb flush
      counters") effectively missed UP since arch/x86/mm/tlb.c is only compiled
      for SMP.
      
      UP systems do not do remote TLB flushes, so compile those counters out on
      UP.
      
      arch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly.  This is
      probably an optimization since both the mtrr code and __flush_tlb() write
      cr4.  It would probably be safe to make that a flush_tlb_all() (and then
      get these statistics), but the mtrr code is ancient and I'm hesitant to
      touch it other than to just stick in the counters.
      
      [akpm@linux-foundation.org: tweak comments]
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vmstats: tlb flush counters · 9824cf97
      Committed by Dave Hansen
      I was investigating some TLB flush scaling issues and realized that we do
      not have any good methods for figuring out how many TLB flushes we are
      doing.
      
      It would be nice to be able to do these in generic code, but the
      arch-independent calls don't explicitly specify whether we actually need
      to do remote flushes or not.  In the end, we really need to know if we
      actually _did_ global vs.  local invalidations, so that leaves us with few
      options other than to muck with the counters from arch-specific code.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 25 Jan, 2013: 1 commit
  9. 30 Nov, 2012: 1 commit
  10. 15 Nov, 2012: 1 commit
  11. 28 Sep, 2012: 1 commit
  12. 07 Sep, 2012: 1 commit
  13. 28 Jun, 2012: 7 commits
    • x86/tlb: do flush_tlb_kernel_range by 'invlpg' · effee4b9
      Committed by Alex Shi
      This patch makes flush_tlb_kernel_range() use 'invlpg'. The
      performance cost and gain were analyzed in the previous patch
      (x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range).

      In the testing: http://lkml.org/lkml/2012/6/21/10

      The cost is mostly hidden by the long kernel path, but the gain is
      still quite clear: memory access in a user application can speed up
      by 30+% while the kernel executes this function.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-10-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/tlb: replace INVALIDATE_TLB_VECTOR by CALL_FUNCTION_VECTOR · 52aec330
      Committed by Alex Shi
      There are 32 INVALIDATE_TLB_VECTORs in the kernel now. That is
      quite a big number of vectors in the IDT, and it is still not
      enough, since modern x86 servers have ever more CPUs. That still
      causes heavy lock contention in TLB flushing.

      This patch replaces them with the generic SMP call function. That
      saves 32 vectors in the IDT and resolves the lock contention in
      TLB flushing on large systems.

      On an NHM-EX machine (4P * 8 cores * HT = 64 CPUs), hackbench
      pthread shows a 3% performance increase.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-9-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/tlb: enable tlb flush range support for x86 · 611ae8e3
      Committed by Alex Shi
      Not every tlb_flush really needs to evacuate all TLB entries; in
      munmap, for example, a few 'invlpg' instructions are better for
      whole-process performance, since they leave most TLB entries in
      place for later accesses.

      This patch also rewrites flush_tlb_range for 2 purposes:
      1, split it out to get a flush_tlb_mm_range function.
      2, clean up to reduce line breaking, thanks to Borislav's input.

      My 'munmap' micro-benchmark http://lkml.org/lkml/2012/5/17/59
      shows that random memory access on another CPU speeds up by 0~50%
      on a 2P * 4 cores * HT NHM-EP while 'munmap' runs.
      
      Thanks Yongjie's testing on this patch:
      -------------
      I used Linux 3.4-RC6 w/ and w/o his patches as Xen dom0 and guest
      kernel.
      After running two benchmarks in Xen HVM guest, I found his patches
      brought about 1%~3% performance gain in 'kernel build' and 'netperf'
      testing, though the performance gain was not very stable in 'kernel
      build' testing.
      
      Some detailed testing results are below.
      
      Testing Environment:
      	Hardware: Romley-EP platform
      	Xen version: latest upstream
      	Linux kernel: 3.4-RC6
      	Guest vCPU number: 8
      	NIC: Intel 82599 (10GB bandwidth)
      
      In 'kernel build' testing in guest:
      	Command line  |  performance gain
      	make -j 4     |     3.81%
      	make -j 8     |     0.37%
      	make -j 16    |    -0.52%
      
      In 'netperf' testing, we tested TCP_STREAM with the default socket
      size, using 16384-byte packets as the large case and 64-byte
      packets as the small case.
      I used several clients to add networking pressure, and the
      'netperf' server automatically spawned several threads to respond
      to them.
      	Packet size  |  Thread number | performance gain
      	16384 bytes  |      4       |   0.02%
      	16384 bytes  |      8       |   2.21%
      	16384 bytes  |      16      |   2.04%
      	64 bytes     |      4       |   1.07%
      	64 bytes     |      8       |   3.31%
      	64 bytes     |      16      |   0.71%
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-8-git-send-email-alex.shi@intel.com
      Tested-by: Ren, Yongjie <yongjie.ren@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/tlb: add tlb_flushall_shift knob into debugfs · 3df3212f
      Committed by Alex Shi
      The kernel will replace the cr3 rewrite with invlpg when
        tlb_flush_entries <= active_tlb_entries / 2^tlb_flushall_shift
      If tlb_flushall_shift is -1, the kernel won't do this replacement.

      Users can modify its value for specific CPUs/applications.
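As arithmetic, the rule above looks like this (a simplified stand-in for the kernel logic; the entry counts and the example shift value are illustrative, not defaults):

```c
#include <assert.h>
#include <stdbool.h>

static int tlb_flushall_shift = 2;	/* example value, settable via debugfs */

/* replace the cr3 rewrite with per-entry invlpg when the number of
 * entries to flush is small relative to the active TLB entries */
static bool use_invlpg(unsigned long flush_entries,
		       unsigned long active_entries)
{
	if (tlb_flushall_shift == -1)
		return false;	/* replacement disabled: always rewrite cr3 */
	return flush_entries <= (active_entries >> tlb_flushall_shift);
}
```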
      
      Thanks to Borislav for providing the help message for
      CONFIG_DEBUG_TLBFLUSH.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-6-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/tlb: add tlb_flushall_shift for specific CPU · c4211f42
      Committed by Alex Shi
      Testing shows that different CPU types (microarchitectures and
      NUMA modes) have different balance points between flushing the
      whole TLB and issuing multiple invlpg operations. There are also
      cases where the tlb flush change does not help at all.

      This patch gives an interface that lets x86 vendor developers set
      a different shift for each CPU type.

      On some machines I have at hand, the balance point is 16 entries
      on Romley-EP, 8 entries on Bloomfield NHM-EP, and 256 on an IVB
      mobile CPU; on a model 15 Core2 Xeon, invlpg does not help at all.

      For untested machines, do a conservative optimization, the same as
      for NHM CPUs.
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-5-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/tlb: fall back to flush all when meet a THP large page · d8dfe60d
      Committed by Alex Shi
      We don't need to flush large pages in PAGE_SIZE steps; that just
      wastes time. And according to our micro-benchmark, large pages
      don't actually benefit from the 'invlpg' optimization, so just
      flushing the whole TLB is enough for them.

      The following results were measured on a 2 CPU * 4 cores * 2 HT
      NHM-EP machine, with the THP 'always' setting.

      Multi-thread testing; the '-t' parameter is the thread number:
                             without this patch 	with this patch
      ./mprotect -t 1         14ns                       13ns
      ./mprotect -t 2         13ns                       13ns
      ./mprotect -t 4         12ns                       11ns
      ./mprotect -t 8         14ns                       10ns
      ./mprotect -t 16        28ns                       28ns
      ./mprotect -t 32        54ns                       52ns
      ./mprotect -t 128       200ns                      200ns
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-4-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • x86/flush_tlb: try flush_tlb_single one by one in flush_tlb_range · e7b52ffd
      Committed by Alex Shi
      x86 has no instruction-level support for flush_tlb_range.
      Currently flush_tlb_range is just implemented by flushing the
      whole TLB. That is not the best solution for all scenarios. In
      fact, if we just use 'invlpg' to flush a few entries from the TLB,
      we gain performance from later accesses that hit the remaining TLB
      entries.

      But the 'invlpg' instruction is costly. Its execution time is
      comparable to a cr3 rewrite, and even a bit longer on SNB CPUs.
      
      So, on a 512 4KB TLB entries CPU, the balance points is at:
      	(512 - X) * 100ns(assumed TLB refill cost) =
      		X(TLB flush entries) * 100ns(assumed invlpg cost)
      
      Here, X is 256, that is 1/2 of 512 entries.
      
      But with the CPU prefetcher and page-miss-handling unit, the
      actual TLB refill cost is far lower than 100ns for sequential
      accesses, and two HT siblings in one core make memory access even
      faster when they are accessing the same memory. So, in this patch,
      I only make the change when the number of target entries is less
      than 1/16 of all active tlb entries. I have no data supporting the
      exact '1/16' fraction, so any suggestions are welcome.
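The balance-point algebra above can be checked by brute force (a sketch under the commit's own assumption of equal 100ns costs; with equal costs, X lands at half of the 512 entries):

```c
#include <assert.h>

/* smallest X for which flushing X entries one by one costs at least
 * as much as refilling the (entries - X) a full flush would evict */
static unsigned long balance_point(unsigned long entries,
				   unsigned long refill_ns,
				   unsigned long invlpg_ns)
{
	unsigned long x;

	for (x = 0; x <= entries; x++)
		if ((entries - x) * refill_ns <= x * invlpg_ns)
			break;
	return x;
}
```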
      
      As for hugetlb, presumably due to the smaller page tables and
      smaller number of active TLB entries, I saw no benefit in my
      benchmark, so it is not optimized for now.

      My micro-benchmark shows that in the ideal scenario, read
      performance improves by 70 percent, and in the worst scenario,
      read/write performance is similar to the unpatched 3.4-rc4 kernel.
      
      Here are the read numbers on my 2P * 4 cores * HT NHM-EP machine,
      with THP 'always':

      Multi-thread testing; the '-t' parameter is the thread number:
      	       	        with patch   unpatched 3.4-rc4
      ./mprotect -t 1           14ns		24ns
      ./mprotect -t 2           13ns		22ns
      ./mprotect -t 4           12ns		19ns
      ./mprotect -t 8           14ns		16ns
      ./mprotect -t 16          28ns		26ns
      ./mprotect -t 32          54ns		51ns
      ./mprotect -t 128         200ns		199ns
      
      Single process with sequential flushing and memory access:
      
      		       	with patch   unpatched 3.4-rc4
      ./mprotect		    7ns			11ns
      ./mprotect -p 4096  -l 8 -n 10240
      			    21ns		21ns
      
      [ hpa: http://lkml.kernel.org/r/1B4B44D9196EFF41AE41FDA404FC0A100BFF94@SHSMSX101.ccr.corp.intel.com
        has additional performance numbers. ]
      Signed-off-by: Alex Shi <alex.shi@intel.com>
      Link: http://lkml.kernel.org/r/1340845344-27557-3-git-send-email-alex.shi@intel.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      e7b52ffd
  14. 15 May 2012, 1 commit
  15. 23 Mar 2012, 1 commit
  16. 15 Mar 2011, 1 commit
  17. 14 Feb 2011, 1 commit
  18. 18 Nov 2010, 1 commit
    • Y
      x86: Use online node real index in calculate_tlb_offset() · 9223081f
      Committed by Yinghai Lu
      Found a NUMA system that doesn't have RAM installed on the first
      socket and hangs while executing init scripts.
      
      bisected it to:
      
       | commit 93296720
       | Author: Shaohua Li <shaohua.li@intel.com>
       | Date:   Wed Oct 20 11:07:03 2010 +0800
       |
       |     x86: Spread tlb flush vector between nodes
      
      It turns out that when the first socket is not online, CPUs on node 1
      can get a tlb_offset bigger than NUM_INVALIDATE_TLB_VECTORS.
      
      This can affect systems like a 4-socket machine where socket 2 is not
      populated: socket 3 will get a too-big tlb_offset.
      
      We need to use the real online node index.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Shaohua Li <shaohua.li@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <4CDEDE59.40603@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9223081f
  19. 01 Nov 2010, 1 commit
    • R
      x86, mm: Fix section mismatch in tlb.c · cf38d0ba
      Committed by Rakib Mullick
      Mark tlb_cpuhp_notify as __cpuinit. It's basically a callback
      function, which is called from the __cpuinit init_smp_flush(), so
      it's safe.
      
      We were warned by the following warning:
      
       WARNING: arch/x86/mm/built-in.o(.text+0x356d): Section mismatch
       in reference from the function tlb_cpuhp_notify() to the
       function .cpuinit.text:calculate_tlb_offset()
       The function tlb_cpuhp_notify() references
       the function __cpuinit calculate_tlb_offset().
       This is often because tlb_cpuhp_notify lacks a __cpuinit
       annotation or the annotation of calculate_tlb_offset is wrong.
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      Cc: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <AANLkTinWQRG=HA9uB3ad0KAqRRTinL6L_4iKgF84coph@mail.gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cf38d0ba
  20. 21 Oct 2010, 1 commit
    • S
      x86: Spread tlb flush vector between nodes · 93296720
      Committed by Shaohua Li
      Currently the flush-TLB vector allocation is based on the equation:
      	sender = smp_processor_id() % 8
      This isn't optimal: CPUs from different nodes can end up with the same
      vector, which causes a lot of lock contention. Instead, we can assign
      the same vectors to CPUs within one node, while different nodes get
      different vectors. This has the following advantages:
      a. if there is lock contention, it is between CPUs of one node, which
      should be much cheaper than contention between nodes.
      b. lock contention between nodes is completely avoided. This
      especially benefits kswapd, the biggest user of TLB flushes, since
      kswapd sets its affinity to a specific node.
      
      In my test this reduced CPU overhead by more than 20% in the extreme
      case. The test machine has 4 nodes and each node has 16 CPUs. I bind
      each node's kswapd to the first CPU of the node and run a workload
      with 4 sequential mmap file read threads. The files are empty sparse
      files, so the workload triggers a lot of page reclaim and TLB flushes.
      The kswapd binding makes it easy to trigger the extreme TLB-flush lock
      contention, because otherwise kswapd keeps migrating between the CPUs
      of a node and I can't get stable results. In real workloads we won't
      always see such heavy TLB-flush lock contention, but it is possible.
      
      [ hpa: folded in fix from Eric Dumazet to use this_cpu_read() ]
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <1287544023.4571.8.camel@sli10-conroe.sh.intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      93296720
  21. 22 Jul 2010, 1 commit
  22. 18 Feb 2010, 1 commit
  23. 19 Nov 2009, 1 commit
    • J
      x86: Eliminate redundant/contradicting cache line size config options · 350f8f56
      Committed by Jan Beulich
      Rather than having X86_L1_CACHE_BYTES and X86_L1_CACHE_SHIFT
      (with inconsistent defaults), just having the latter suffices as
      the former can be easily calculated from it.
      
      To be consistent, also change X86_INTERNODE_CACHE_BYTES to
      X86_INTERNODE_CACHE_SHIFT, and set it to 7 (128 bytes) for NUMA
      to account for last level cache line size (which here matters
      more than L1 cache line size).
      
      Finally, make sure the default value for X86_L1_CACHE_SHIFT,
      when X86_GENERIC is selected, is being seen before that for the
      individual CPU model options (other than on x86-64, where
      GENERIC_CPU is part of the choice construct, X86_GENERIC is a
      separate option on ix86).
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Acked-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Acked-by: Nick Piggin <npiggin@suse.de>
      LKML-Reference: <4AFD5710020000780001F8F0@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      350f8f56
  24. 24 Sep 2009, 1 commit
  25. 22 Aug 2009, 1 commit
    • L
      x86: don't call '->send_IPI_mask()' with an empty mask · b04e6373
      Committed by Linus Torvalds
      As noted in 83d349f3 ("x86: don't send
      an IPI to the empty set of CPU's"), some APIC's will be very unhappy
      with an empty destination mask.  That commit added a WARN_ON() for that
      case, and avoided the resulting problem, but didn't fix the underlying
      reason for why those empty mask cases happened.
      
      This fixes that by checking whether the result of 'cpumask_andnot()'
      with the current CPU still has any other CPUs left in the set to be
      sent a TLB flush, and not calling down to the IPI code if the mask is
      empty.
      
      The reason this started happening at all is that we started passing just
      the CPU mask pointers around in commit 4595f962 ("x86: change
      flush_tlb_others to take a const struct cpumask"), and when we did that,
      the cpumask was no longer thread-local.
      
      Before that commit, flush_tlb_mm() used to create its own copy of
      'mm->cpu_vm_mask' and pass that copy down to the low-level flush
      routines after having tested that it was not empty.  But after
      changing it to just pass down the CPU mask pointer, the lower-level
      TLB flush routines now get a pointer to that 'mm->cpu_vm_mask', and
      it can still change - and become empty - after the test, due to
      other CPUs having flushed their own TLBs.
      
      See
      
      	http://bugzilla.kernel.org/show_bug.cgi?id=13933
      
      for details.
      Tested-by: Thomas Björnell <thomas.bjornell@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b04e6373