1. 22 Mar, 2014 (1 commit)
  2. 07 Mar, 2014 (7 commits)
  3. 13 Feb, 2014 (1 commit)
  4. 29 Jan, 2014 (2 commits)
  5. 22 Jan, 2014 (3 commits)
    • mm: compaction: trace compaction begin and end · 0eb927c0
      Committed by Mel Gorman
      The broad goal of the series is to improve allocation success rates for
      huge pages through memory compaction, while trying not to increase the
      compaction overhead.  The original objective was to reintroduce
      capturing of high-order pages freed by the compaction, before they are
      split by concurrent activity.  However, several bugs and opportunities
      for simple improvements were found in the current implementation, mostly
      through extra tracepoints (which are however too ugly for now to be
      considered for sending).
      
      The patches mostly deal with two mechanisms that reduce compaction
      overhead: caching the progress of the migrate and free scanners, and
      marking pageblocks where isolation failed so that they are skipped during
      further scans.
      
      Patch 1 (from mgorman) adds tracepoints that allow calculating the time spent in
              compaction and potentially debugging scanner pfn values.
      
      Patch 2 encapsulates some of the functionality for handling deferred compaction,
              for better maintainability and with no functional change.
      
      Patch 3 fixes a bug where cached scanner pfn's are sometimes reset only after
              they have been read to initialize a compaction run.
      
      Patch 4 fixes a bug where the scanners meeting each other is sometimes not
              properly detected, which can lead to multiple compaction attempts
              quitting early without doing any work.
      
      Patch 5 improves the chances of sync compaction to process pageblocks that
              async compaction has skipped due to being !MIGRATE_MOVABLE.
      
      Patch 6 improves the chances of sync direct compaction to actually do anything
              when called after async compaction fails during allocation slowpath.
      
      The impact of the patches was validated using mmtests' stress-highalloc
      benchmark on an x86_64 machine with 4GB of memory.
      
      Due to the instability of the results (mostly related to the bugs fixed by
      patches 2 and 3), 10 iterations were performed, taking min, mean, and max
      values for success rates and mean values for time and vmstat-based
      metrics.
      
      First, the default GFP_HIGHUSER_MOVABLE allocations were tested with the
      patches stacked on top of v3.13-rc2.  Patch 2 is OK to serve as the baseline,
      since patches 1 and 2 contain no functional changes.  Comments below.
      
      stress-highalloc
                                   3.13-rc2              3.13-rc2              3.13-rc2              3.13-rc2              3.13-rc2
                                    2-nothp               3-nothp               4-nothp               5-nothp               6-nothp
      Success 1 Min          9.00 (  0.00%)       10.00 (-11.11%)       43.00 (-377.78%)       43.00 (-377.78%)       33.00 (-266.67%)
      Success 1 Mean        27.50 (  0.00%)       25.30 (  8.00%)       45.50 (-65.45%)       45.90 (-66.91%)       46.30 (-68.36%)
      Success 1 Max         36.00 (  0.00%)       36.00 (  0.00%)       47.00 (-30.56%)       48.00 (-33.33%)       52.00 (-44.44%)
      Success 2 Min         10.00 (  0.00%)        8.00 ( 20.00%)       46.00 (-360.00%)       45.00 (-350.00%)       35.00 (-250.00%)
      Success 2 Mean        26.40 (  0.00%)       23.50 ( 10.98%)       47.30 (-79.17%)       47.60 (-80.30%)       48.10 (-82.20%)
      Success 2 Max         34.00 (  0.00%)       33.00 (  2.94%)       48.00 (-41.18%)       50.00 (-47.06%)       54.00 (-58.82%)
      Success 3 Min         65.00 (  0.00%)       63.00 (  3.08%)       85.00 (-30.77%)       84.00 (-29.23%)       85.00 (-30.77%)
      Success 3 Mean        76.70 (  0.00%)       70.50 (  8.08%)       86.20 (-12.39%)       85.50 (-11.47%)       86.00 (-12.13%)
      Success 3 Max         87.00 (  0.00%)       86.00 (  1.15%)       88.00 ( -1.15%)       87.00 (  0.00%)       87.00 (  0.00%)
      
                  3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2
                   2-nothp     3-nothp     4-nothp     5-nothp     6-nothp
      User         6437.72     6459.76     5960.32     5974.55     6019.67
      System       1049.65     1049.09     1029.32     1031.47     1032.31
      Elapsed      1856.77     1874.48     1949.97     1994.22     1983.15
      
                                    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2
                                     2-nothp     3-nothp     4-nothp     5-nothp     6-nothp
      Minor Faults                 253952267   254581900   250030122   250507333   250157829
      Major Faults                       420         407         506         530         530
      Swap Ins                             4           9           9           6           6
      Swap Outs                          398         375         345         346         333
      Direct pages scanned            197538      189017      298574      287019      299063
      Kswapd pages scanned           1809843     1801308     1846674     1873184     1861089
      Kswapd pages reclaimed         1806972     1798684     1844219     1870509     1858622
      Direct pages reclaimed          197227      188829      298380      286822      298835
      Kswapd efficiency                  99%         99%         99%         99%         99%
      Kswapd velocity                953.382     970.449     952.243     934.569     922.286
      Direct efficiency                  99%         99%         99%         99%         99%
      Direct velocity                104.058     101.832     153.961     143.200     148.205
      Percentage direct scans             9%          9%         13%         13%         13%
      Zone normal velocity           347.289     359.676     348.063     339.933     332.983
      Zone dma32 velocity            710.151     712.605     758.140     737.835     737.507
      Zone dma velocity                0.000       0.000       0.000       0.000       0.000
      Page writes by reclaim         557.600     429.000     353.600     426.400     381.800
      Page writes file                   159          53           7          79          48
      Page writes anon                   398         375         345         346         333
      Page reclaim immediate             825         644         411         575         420
      Sector Reads                   2781750     2769780     2878547     2939128     2910483
      Sector Writes                 12080843    12083351    12012892    12002132    12010745
      Page rescued immediate               0           0           0           0           0
      Slabs scanned                  1575654     1545344     1778406     1786700     1794073
      Direct inode steals               9657       10037       15795       14104       14645
      Kswapd inode steals              46857       46335       50543       50716       51796
      Kswapd skipped wait                  0           0           0           0           0
      THP fault alloc                     97          91          81          71          77
      THP collapse alloc                 456         506         546         544         565
      THP splits                           6           5           5           4           4
      THP fault fallback                   0           1           0           0           0
      THP collapse fail                   14          14          12          13          12
      Compaction stalls                 1006         980        1537        1536        1548
      Compaction success                 303         284         562         559         578
      Compaction failures                702         696         974         976         969
      Page migrate success           1177325     1070077     3927538     3781870     3877057
      Page migrate failure                 0           0           0           0           0
      Compaction pages isolated      2547248     2306457     8301218     8008500     8200674
      Compaction migrate scanned    42290478    38832618   153961130   154143900   159141197
      Compaction free scanned       89199429    79189151   356529027   351943166   356326727
      Compaction cost                   1566        1426        5312        5156        5294
      NUMA PTE updates                     0           0           0           0           0
      NUMA hint faults                     0           0           0           0           0
      NUMA hint local faults               0           0           0           0           0
      NUMA hint local percent            100         100         100         100         100
      NUMA pages migrated                  0           0           0           0           0
      AutoNUMA cost                        0           0           0           0           0
      
      Observations:
      
      - The "Success 3" line is allocation success rate with system idle
        (phases 1 and 2 are with background interference).  I used to get stable
        values around 85% with vanilla 3.11.  The lower min and mean values came
        with 3.12.  This was bisected to commit 81c0a2bb ("mm: page_alloc: fair
        zone allocator policy").  As explained in the comment for patch 3, I don't
        think the commit is wrong, but that it makes the effect of compaction
        bugs worse.  From patch 3 onwards, the results are OK and match the 3.11
        results.
      
      - Patch 4 also clearly helps phases 1 and 2, and exceeds any results
        I've seen with 3.11 (I didn't measure it that thoroughly then, but it
        was never above 40%).
      
      - Compaction cost and the number of scanned pages are higher, especially due
        to patch 4.  However, keep in mind that patches 3 and 4 fix existing
        bugs in the current design of compaction overhead mitigation; they do
        not change it.  If the overhead is found unacceptable, then it should be
        decreased differently (and consistently, not due to random conditions)
        than the current implementation does.  In contrast, patches 5 and 6
        (which are not strictly bug fixes) do not increase the overhead (but
        also do not increase success rates).  This might be a limitation of the
        stress-highalloc benchmark, as it's quite uniform.
      
      Another set of results is when configuring stress-highalloc to allocate
      with similar flags as THP uses:
       (GFP_HIGHUSER_MOVABLE|__GFP_NOMEMALLOC|__GFP_NORETRY|__GFP_NO_KSWAPD)
      
      stress-highalloc
                                   3.13-rc2              3.13-rc2              3.13-rc2              3.13-rc2              3.13-rc2
                                      2-thp                 3-thp                 4-thp                 5-thp                 6-thp
      Success 1 Min          2.00 (  0.00%)        7.00 (-250.00%)       18.00 (-800.00%)       19.00 (-850.00%)       26.00 (-1200.00%)
      Success 1 Mean        19.20 (  0.00%)       17.80 (  7.29%)       29.20 (-52.08%)       29.90 (-55.73%)       32.80 (-70.83%)
      Success 1 Max         27.00 (  0.00%)       29.00 ( -7.41%)       35.00 (-29.63%)       36.00 (-33.33%)       37.00 (-37.04%)
      Success 2 Min          3.00 (  0.00%)        8.00 (-166.67%)       21.00 (-600.00%)       21.00 (-600.00%)       32.00 (-966.67%)
      Success 2 Mean        19.30 (  0.00%)       17.90 (  7.25%)       32.20 (-66.84%)       32.60 (-68.91%)       35.70 (-84.97%)
      Success 2 Max         27.00 (  0.00%)       30.00 (-11.11%)       36.00 (-33.33%)       37.00 (-37.04%)       39.00 (-44.44%)
      Success 3 Min         62.00 (  0.00%)       62.00 (  0.00%)       85.00 (-37.10%)       75.00 (-20.97%)       64.00 ( -3.23%)
      Success 3 Mean        66.30 (  0.00%)       65.50 (  1.21%)       85.60 (-29.11%)       83.40 (-25.79%)       83.50 (-25.94%)
      Success 3 Max         70.00 (  0.00%)       69.00 (  1.43%)       87.00 (-24.29%)       86.00 (-22.86%)       87.00 (-24.29%)
      
                  3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2
                     2-thp       3-thp       4-thp       5-thp       6-thp
      User         6547.93     6475.85     6265.54     6289.46     6189.96
      System       1053.42     1047.28     1043.23     1042.73     1038.73
      Elapsed      1835.43     1821.96     1908.67     1912.74     1956.38
      
                                    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2    3.13-rc2
                                       2-thp       3-thp       4-thp       5-thp       6-thp
      Minor Faults                 256805673   253106328   253222299   249830289   251184418
      Major Faults                       395         375         423         434         448
      Swap Ins                            12          10          10          12           9
      Swap Outs                          530         537         487         455         415
      Direct pages scanned             71859       86046      153244      152764      190713
      Kswapd pages scanned           1900994     1870240     1898012     1892864     1880520
      Kswapd pages reclaimed         1897814     1867428     1894939     1890125     1877924
      Direct pages reclaimed           71766       85908      153167      152643      190600
      Kswapd efficiency                  99%         99%         99%         99%         99%
      Kswapd velocity               1029.000    1067.782    1000.091     991.049     951.218
      Direct efficiency                  99%         99%         99%         99%         99%
      Direct velocity                 38.897      49.127      80.747      79.983      96.468
      Percentage direct scans             3%          4%          7%          7%          9%
      Zone normal velocity           351.377     372.494     348.910     341.689     335.310
      Zone dma32 velocity            716.520     744.414     731.928     729.343     712.377
      Zone dma velocity                0.000       0.000       0.000       0.000       0.000
      Page writes by reclaim         669.300     604.000     545.700     538.900     429.900
      Page writes file                   138          66          58          83          14
      Page writes anon                   530         537         487         455         415
      Page reclaim immediate             806         655         772         548         517
      Sector Reads                   2711956     2703239     2811602     2818248     2839459
      Sector Writes                 12163238    12018662    12038248    11954736    11994892
      Page rescued immediate               0           0           0           0           0
      Slabs scanned                  1385088     1388364     1507968     1513292     1558656
      Direct inode steals               1739        2564        4622        5496        6007
      Kswapd inode steals              47461       46406       47804       48013       48466
      Kswapd skipped wait                  0           0           0           0           0
      THP fault alloc                    110          82          84          69          70
      THP collapse alloc                 445         482         467         462         539
      THP splits                           6           5           4           5           3
      THP fault fallback                   3           0           0           0           0
      THP collapse fail                   15          14          14          14          13
      Compaction stalls                  659         685        1033        1073        1111
      Compaction success                 222         225         410         427         456
      Compaction failures                436         460         622         646         655
      Page migrate success            446594      439978     1085640     1095062     1131716
      Page migrate failure                 0           0           0           0           0
      Compaction pages isolated      1029475     1013490     2453074     2482698     2565400
      Compaction migrate scanned     9955461    11344259    24375202    27978356    30494204
      Compaction free scanned       27715272    28544654    80150615    82898631    85756132
      Compaction cost                    552         555        1344        1379        1436
      NUMA PTE updates                     0           0           0           0           0
      NUMA hint faults                     0           0           0           0           0
      NUMA hint local faults               0           0           0           0           0
      NUMA hint local percent            100         100         100         100         100
      NUMA pages migrated                  0           0           0           0           0
      AutoNUMA cost                        0           0           0           0           0
      
      There are some differences from the previous results for THP-like allocations:
      
      - Here, the bad result for the unpatched kernel in phase 3 is much more
        consistent, staying between 65-70%, and is not related to the "regression" in
        3.12.  Still, there is the improvement from patch 4 onwards, which brings
        it on par with simple GFP_HIGHUSER_MOVABLE allocations.
      
      - Compaction costs have increased, but nowhere near as much as in the
        non-THP case.  Again, the patches should be worth the gained
        determinism.
      
      - Patches 5 and 6 somewhat increase the number of migrate-scanned pages.
        This is most likely due to the __GFP_NO_KSWAPD flag, which means the cached
        pfns and pageblock skip bits are not reset by kswapd that often (at
        least in phase 3, where no concurrent activity would wake up kswapd) and
        the patches thus help the sync-after-async compaction.  It does not,
        however, show that sync compaction helps success rates that much, which
        can again be seen as a limitation of the benchmark scenario.
      
      This patch (of 6):
      
      Add two tracepoints for compaction begin and end of a zone.  Using this it
      is possible to calculate how much time a workload is spending within
      compaction and potentially debug problems related to cached pfns for
      scanning.  In combination with the direct reclaim and slab trace points it
      should be possible to estimate most allocation-related overhead for a
      workload.
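      
      As a rough illustration of what the new event pair enables (this is not
      part of the patch), the begin/end events could be paired from userspace
      to estimate time spent in compaction.  The event names and the
      trace_pipe line layout assumed below follow the description above and
      may differ on a given kernel:
      
          /* Hedged sketch: pair compaction begin/end events from trace_pipe
           * and accumulate the elapsed time between them. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          
          int main(void)
          {
                  FILE *fp = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
                  char line[1024];
                  double begin = 0.0, total = 0.0;
          
                  if (!fp)
                          return 1;
                  while (fgets(line, sizeof(line), fp)) {
                          char *ev = strstr(line, ": mm_compaction_");
                          char *ts;
          
                          if (!ev)
                                  continue;
                          /* the timestamp is the token just before the event name */
                          ts = ev;
                          while (ts > line && ts[-1] != ' ')
                                  ts--;
                          if (strstr(ev, "mm_compaction_begin")) {
                                  begin = strtod(ts, NULL);
                          } else if (begin > 0.0) {
                                  total += strtod(ts, NULL) - begin;
                                  printf("time in compaction so far: %.6f s\n", total);
                                  begin = 0.0;
                          }
                  }
                  fclose(fp);
                  return 0;
          }
      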
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0eb927c0
    • sched: add tracepoints related to NUMA task migration · 286549dc
      Committed by Mel Gorman
      This patch adds three tracepoints
       o trace_sched_move_numa	when a task is moved to a node
       o trace_sched_swap_numa	when a task is swapped with another task
       o trace_sched_stick_numa	when a numa-related migration fails
      
      The tracepoints allow the NUMA scheduler activity to be monitored and the
      following high-level metrics can be calculated
      
       o NUMA migrated stuck	 nr trace_sched_stick_numa
       o NUMA migrated idle	 nr trace_sched_move_numa
       o NUMA migrated swapped nr trace_sched_swap_numa
       o NUMA local swapped	 trace_sched_swap_numa src_nid == dst_nid (should never happen)
       o NUMA remote swapped	 trace_sched_swap_numa src_nid != dst_nid (should == NUMA migrated swapped)
       o NUMA group swapped	 trace_sched_swap_numa src_ngid == dst_ngid
      			 Maybe a small number of these are acceptable
      			 but a high number would be a major surprise.
      			 It would be even worse if bounces are frequent.
       o NUMA avg task migs.	 Average number of migrations for tasks
       o NUMA stddev task mig	 Self-explanatory
       o NUMA max task migs.	 Maximum number of migrations for a single task
      
      In general the intent of the tracepoints is to help diagnose problems
      where automatic NUMA balancing appears to be doing an excessive amount
      of useless work.
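      
      As a hedged sketch (not from the patch), the raw counts behind several of
      these metrics could be tallied from trace_pipe once the three events are
      enabled under events/sched/ in tracefs; the event names follow the list
      above:
      
          #include <stdio.h>
          #include <string.h>
          
          int main(void)
          {
                  FILE *fp = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
                  char line[1024];
                  long moved = 0, swapped = 0, stuck = 0;
          
                  if (!fp)
                          return 1;
                  while (fgets(line, sizeof(line), fp)) {
                          if (strstr(line, "sched_move_numa:"))
                                  moved++;        /* NUMA migrated idle */
                          else if (strstr(line, "sched_swap_numa:"))
                                  swapped++;      /* NUMA migrated swapped */
                          else if (strstr(line, "sched_stick_numa:"))
                                  stuck++;        /* NUMA migrated stuck */
                          fprintf(stderr, "\rmove=%ld swap=%ld stick=%ld",
                                  moved, swapped, stuck);
                  }
                  fclose(fp);
                  return 0;
          }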
      
      [akpm@linux-foundation.org: remove semicolon-after-if, repair coding-style]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      286549dc
    • mm: numa: trace tasks that fail migration due to rate limiting · af1839d7
      Committed by Mel Gorman
      A low local/remote numa hinting fault ratio is potentially explained by
      failed migrations.  This patch adds a tracepoint that fires when
      migration fails due to migration rate limitation.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Alex Thorlton <athorlton@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af1839d7
  6. 17 Jan, 2014 (1 commit)
  7. 15 Jan, 2014 (2 commits)
  8. 10 Jan, 2014 (1 commit)
  9. 09 Jan, 2014 (2 commits)
  10. 01 Jan, 2014 (1 commit)
  11. 23 Dec, 2013 (5 commits)
  12. 22 Dec, 2013 (1 commit)
    • tracing: Add and use generic set_trigger_filter() implementation · bac5fb97
      Committed by Tom Zanussi
      Add a generic event_command.set_trigger_filter() op implementation and
      have the current set of trigger commands use it - this essentially
      gives them all support for filters.
      
      Syntactically, filters are supported by adding 'if <filter>' just
      after the command, in which case only events matching the filter will
      invoke the trigger.  For example, to add a filter to an
      enable/disable_event command:
      
          echo 'enable_event:system:event if common_pid == 999' > \
                    .../othersys/otherevent/trigger
      
      The above command will only enable the system:event event if the
      common_pid field in the othersys:otherevent event is 999.
      
      As another example, to add a filter to a stacktrace command:
      
          echo 'stacktrace if common_pid == 999' > \
                         .../somesys/someevent/trigger
      
      The above command will only trigger a stacktrace if the common_pid
      field in the event is 999.
      
      The filter syntax is the same as that described in the 'Event
      filtering' section of Documentation/trace/events.txt.
      
      Because triggers can now use filters, the trigger-invoking logic needs
      to be moved in those cases - e.g. for ftrace_raw_event_calls, if a
      trigger has a filter associated with it, the trigger invocation now
      needs to happen after the { assign; } part of the call, in order for
      the trigger condition to be tested.
      
      There's still a SOFT_DISABLED-only check at the top of e.g. the
      ftrace_raw_events function, so when an event is soft disabled but not
      because of the presence of a trigger, the original SOFT_DISABLED
      behavior remains unchanged.
      
      There's also a bit of trickiness in that some triggers need to avoid
      being invoked while an event is currently in the process of being
      logged, since the trigger may itself log data into the trace buffer.
      Thus we make sure the current event is committed before invoking those
      triggers.  To do that, we split the trigger invocation in two - the
      first part (event_triggers_call()) checks the filter using the current
      trace record; if a command has the post_trigger flag set, it sets a
      bit for itself in the return value, otherwise it directly invokes the
      trigger.  Once all commands have been either invoked or set their
      return flag, event_triggers_call() returns.  The current record is
      then either committed or discarded; if any commands have deferred
      their triggers, those commands are finally invoked following the close
      of the current event by event_triggers_post_call().
      
      To simplify the above and make it more efficient, the TRIGGER_COND bit
      is introduced, which is set only if a soft-disabled trigger needs to
      use the log record for filter testing or needs to wait until the
      current log record is closed.
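      
      A hypothetical userspace analogy of this two-phase call is sketched
      below; the command table and names are illustrative only, and the
      per-command filter check done by the real event_triggers_call() is
      omitted for brevity:
      
          #include <stdio.h>
          
          #define CMD_POST_TRIGGER  (1 << 0)  /* command logs data itself; defer it */
          
          struct command {
                  const char *name;
                  unsigned int flags;
                  void (*run)(const char *name);
          };
          
          static void run_cmd(const char *name)
          {
                  printf("invoking trigger '%s'\n", name);
          }
          
          static struct command cmds[] = {
                  { "cmd_immediate", 0,                run_cmd },
                  { "cmd_logs_data", CMD_POST_TRIGGER, run_cmd },
          };
          
          #define NR_CMDS (sizeof(cmds) / sizeof(cmds[0]))
          
          /* phase 1: run immediate commands, return a bitmask of deferred ones */
          static unsigned int triggers_call(void)
          {
                  unsigned int deferred = 0;
                  unsigned int i;
          
                  for (i = 0; i < NR_CMDS; i++) {
                          if (cmds[i].flags & CMD_POST_TRIGGER)
                                  deferred |= 1u << i;
                          else
                                  cmds[i].run(cmds[i].name);
                  }
                  return deferred;
          }
          
          /* phase 2: after the record is committed, run the deferred commands */
          static void triggers_post_call(unsigned int deferred)
          {
                  unsigned int i;
          
                  for (i = 0; i < NR_CMDS; i++)
                          if (deferred & (1u << i))
                                  cmds[i].run(cmds[i].name);
          }
          
          int main(void)
          {
                  unsigned int deferred = triggers_call();
          
                  printf("-- current trace record committed or discarded --\n");
                  triggers_post_call(deferred);
                  return 0;
          }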
      
      The syscall event invocation code is also changed in analogous ways.
      
      Because event triggers need to be able to create and free filters,
      this also adds a couple external wrappers for the existing
      create_filter and free_filter functions, which are too generic to be
      made extern functions themselves.
      
      Link: http://lkml.kernel.org/r/7164930759d8719ef460357f143d995406e4eead.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      bac5fb97
  13. 21 Dec, 2013 (1 commit)
    • tracing: Add basic event trigger framework · 85f2b082
      Committed by Tom Zanussi
      Add a 'trigger' file for each trace event, enabling 'trace event
      triggers' to be set for trace events.
      
      'trace event triggers' are patterned after the existing 'ftrace
      function triggers' implementation except that triggers are written to
      per-event 'trigger' files instead of to a single file such as the
      'set_ftrace_filter' used for ftrace function triggers.
      
      The implementation is meant to be entirely separate from ftrace
      function triggers, in order to keep the respective implementations
      relatively simple and to allow them to diverge.
      
      The event trigger functionality is built on top of SOFT_DISABLE
      functionality.  It adds a TRIGGER_MODE bit to the ftrace_event_file
      flags which is checked when any trace event fires.  Triggers set for a
      particular event need to be checked regardless of whether that event
      is actually enabled or not - getting an event to fire even if it's not
      enabled is what's already implemented by SOFT_DISABLE mode, so trigger
      mode directly reuses that.  Event triggers essentially inherit the soft
      disable logic in __ftrace_event_enable_disable() while adding a bit of
      logic and trigger reference counting via tm_ref on top of that in a
      new trace_event_trigger_enable_disable() function.  Because the base
      __ftrace_event_enable_disable() code now needs to be invoked from
      outside trace_events.c, a wrapper is also added for those usages.
      
      The triggers for an event are actually invoked via a new function,
      event_triggers_call(), and code is also added to invoke them for
      ftrace_raw_event calls as well as syscall events.
      
      The main part of the patch creates a new trace_events_trigger.c file
      to contain the trace event triggers implementation.
      
      The standard open, read, and release file operations are implemented
      here.
      
      The open() implementation sets up for the various open modes of the
      'trigger' file.  It creates and attaches the trigger iterator and sets
      up the command parser.  If opened for reading, it sets up the trigger
      seq_ops.
      
      The write() implementation parses the event trigger written to the
      'trigger' file, looks up the trigger command, and passes it along to
      that event_command's func() implementation for command-specific
      processing.
      
      The release() implementation does whatever cleanup is needed to
      release the 'trigger' file, like releasing the parser and trigger
      iterator, etc.
      
      A couple of functions for event command registration and
      unregistration are added, along with a list to add them to and a mutex
      to protect them, as well as an (initially empty) registration function
      to add the set of commands that will be added by future commits, and
      a call to it from the trace event initialization code.
      
      Also added are a couple of trigger-specific data structures needed for
      these implementations, such as a trigger iterator and a struct for
      trigger-specific data.
      
      A couple of structs consisting mostly of functions meant to be implemented
      in command-specific ways, event_command and event_trigger_ops, are
      used by the generic event trigger command implementations.  They're
      being put into trace.h alongside the other trace_event data structures
      and functions, in the expectation that they'll be needed in several
      trace_event-related files such as trace_events_trigger.c and
      trace_events.c.
      
      The event_command.func() function is meant to be called by the trigger
      parsing code in order to add a trigger instance to the corresponding
      event.  It essentially coordinates adding a live trigger instance to
      the event and arming the trigger on that event.
      
      Every event_command func() implementation essentially does the
      same thing for any command:
      
         - choose ops - use the value of param to choose either a number or
           count version of event_trigger_ops specific to the command
         - do the register or unregister of those ops
         - associate a filter, if specified, with the triggering event
      
      The reg() and unreg() ops allow command-specific implementations for
      event_trigger_op registration and unregistration, and the
      get_trigger_ops() op allows command-specific event_trigger_ops
      selection to be parameterized.  When a trigger instance is added, the
      reg() op essentially adds that trigger to the triggering event and
      arms it, while unreg() does the opposite.  The set_filter() function
      is used to associate a filter with the trigger - if the command
      doesn't specify a set_filter() implementation, the command will ignore
      filters.
      
      Each command has an associated trigger_type, which serves double duty,
      both as a unique identifier for the command as well as a value that
      can be used for setting a trigger mode bit during trigger invocation.
      
      The signature of func() adds a pointer to the event_command struct,
      used to invoke those functions, along with a command_data param that
      can be passed to the reg/unreg functions.  This allows func()
      implementations to use command-specific blobs and supports code
      re-use.
      
      The event_trigger_ops.func() function corresponds to the trigger 'probe'
      function that gets called when the triggering event is actually
      invoked.  The other functions are used to list the trigger when
      needed, along with a couple mundane book-keeping functions.
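      
      For orientation, here is a rough, non-authoritative sketch of the two
      structs as inferred from the description above; the real definitions
      added to trace.h by this patch differ in detail:
      
          #include <stdbool.h>
          
          struct ftrace_event_file;
          struct event_trigger_data;
          struct event_trigger_ops;
          struct seq_file;
          
          struct event_command {
                  const char      *name;          /* e.g. "traceon", "stacktrace" */
                  int             trigger_type;   /* unique id, doubles as a mode bit */
                  bool            post_trigger;   /* defer until the record is closed */
                  /* coordinate adding a live trigger instance and arming it */
                  int (*func)(struct event_command *cmd_ops,
                              struct ftrace_event_file *file,
                              char *glob, char *cmd, char *param);
                  /* command-specific registration/unregistration of the ops */
                  int  (*reg)(char *glob, struct event_trigger_ops *ops,
                              struct event_trigger_data *data,
                              struct ftrace_event_file *file);
                  void (*unreg)(char *glob, struct event_trigger_ops *ops,
                                struct event_trigger_data *data,
                                struct ftrace_event_file *file);
                  /* associate a filter; commands without it ignore filters */
                  int (*set_filter)(char *filter_str,
                                    struct event_trigger_data *data,
                                    struct ftrace_event_file *file);
                  /* pick a variant of event_trigger_ops based on param */
                  struct event_trigger_ops *(*get_trigger_ops)(char *cmd,
                                                               char *param);
          };
          
          struct event_trigger_ops {
                  void (*func)(struct event_trigger_data *data); /* the 'probe' */
                  int  (*init)(struct event_trigger_ops *ops,
                               struct event_trigger_data *data);
                  void (*free)(struct event_trigger_ops *ops,
                               struct event_trigger_data *data);
                  int  (*print)(struct seq_file *m,              /* list trigger */
                                struct event_trigger_ops *ops,
                                struct event_trigger_data *data);
          };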
      
      This also moves event_file_data() into trace.h so it can be used
      outside of trace_events.c.
      
      Link: http://lkml.kernel.org/r/316d95061accdee070aac8e5750afba0192fa5b9.1382622043.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Idea-by: Steve Rostedt <rostedt@goodmis.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      85f2b082
  14. 11 Dec, 2013 (1 commit)
  15. 08 Dec, 2013 (1 commit)
  16. 26 Nov, 2013 (1 commit)
    • tracing: Allow events to have NULL strings · 4e58e547
      Committed by Steven Rostedt (Red Hat)
      If a TRACE_EVENT() uses __assign_str() or __get_str() on a NULL pointer
      then the following oops will happen:
      
      BUG: unable to handle kernel NULL pointer dereference at   (null)
      IP: [<c127a17b>] strlen+0x10/0x1a
      *pde = 00000000
      Oops: 0000 [#1] PREEMPT SMP
      Modules linked in:
      CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.13.0-rc1-test+ #2
      Hardware name:                  /DG965MQ, BIOS MQ96510J.86A.0372.2006.0605.1717 06/05/2006
      task: f5cde9f0 ti: f5e5e000 task.ti: f5e5e000
      EIP: 0060:[<c127a17b>] EFLAGS: 00210046 CPU: 1
      EIP is at strlen+0x10/0x1a
      EAX: 00000000 EBX: c2472da8 ECX: ffffffff EDX: c2472da8
      ESI: c1c5e5fc EDI: 00000000 EBP: f5e5fe84 ESP: f5e5fe80
       DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
      CR0: 8005003b CR2: 00000000 CR3: 01f32000 CR4: 000007d0
      Stack:
       f5f18b90 f5e5feb8 c10687a8 0759004f 00000005 00000005 00000005 00200046
       00000002 00000000 c1082a93 f56c7e28 c2472da8 c1082a93 f5e5fee4 c106bc61
       00000000 c1082a93 00000000 00000000 00000001 00200046 00200082 00000000
      Call Trace:
       [<c10687a8>] ftrace_raw_event_lock+0x39/0xc0
       [<c1082a93>] ? ktime_get+0x29/0x69
       [<c1082a93>] ? ktime_get+0x29/0x69
       [<c106bc61>] lock_release+0x57/0x1a5
       [<c1082a93>] ? ktime_get+0x29/0x69
       [<c10824dd>] read_seqcount_begin.constprop.7+0x4d/0x75
       [<c1082a93>] ? ktime_get+0x29/0x69
       [<c1082a93>] ktime_get+0x29/0x69
       [<c108a46a>] __tick_nohz_idle_enter+0x1e/0x426
       [<c10690e8>] ? lock_release_holdtime.part.19+0x48/0x4d
       [<c10bc184>] ? time_hardirqs_off+0xe/0x28
       [<c1068c82>] ? trace_hardirqs_off_caller+0x3f/0xaf
       [<c108a8cb>] tick_nohz_idle_enter+0x59/0x62
       [<c1079242>] cpu_startup_entry+0x64/0x192
       [<c102299c>] start_secondary+0x277/0x27c
      Code: 90 89 c6 89 d0 88 c4 ac 38 e0 74 09 84 c0 75 f7 be 01 00 00 00 89 f0 48 5e 5d c3 55 89 e5 57 66 66 66 66 90 83 c9 ff 89 c7 31 c0 <f2> ae f7 d1 8d 41 ff 5f 5d c3 55 89 e5 57 66 66 66 66 90 31 ff
      EIP: [<c127a17b>] strlen+0x10/0x1a SS:ESP 0068:f5e5fe80
      CR2: 0000000000000000
      ---[ end trace 01bc47bf519ec1b2 ]---
      
      New tracepoints have been added that end up assigning NULL pointers to
      strings.  To fix this, change the TRACE_EVENT() code to check for NULL
      and, if so, assign "(null)" instead (similar to what glibc printf does).
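      
      The idea in miniature, as a userspace analogy rather than the actual
      TRACE_EVENT() macro change:
      
          /* Userspace analogy of the fix: fall back to "(null)" when the
           * string argument is NULL instead of dereferencing it. */
          #include <stdio.h>
          
          #define SAFE_STR(s) ((s) ? (s) : "(null)")
          
          int main(void)
          {
                  const char *lock_name = NULL;
          
                  /* without the fallback, strlen()/strcpy() on NULL would oops;
                   * with it we simply record the literal "(null)" */
                  printf("lock name: %s\n", SAFE_STR(lock_name));
                  return 0;
          }
      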
      Reported-by: Shuah Khan <shuah.kh@samsung.com>
      Reported-by: Jovi Zhangwei <jovi.zhangwei@gmail.com>
      Link: http://lkml.kernel.org/r/CAGdX0WFeEuy+DtpsJzyzn0343qEEjLX97+o1VREFkUEhndC+5Q@mail.gmail.com
      Link: http://lkml.kernel.org/r/528D6972.9010702@samsung.com
      Fixes: 9cbf1176 ("tracing/events: provide string with undefined size support")
      Cc: stable@vger.kernel.org # 2.6.31+
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      4e58e547
  17. 24 Nov, 2013 (1 commit)
    • block: Abstract out bvec iterator · 4f024f37
      Committed by Kent Overstreet
      Immutable biovecs are going to require an explicit iterator. To
      implement immutable bvecs, a later patch is going to add a bi_bvec_done
      member to this struct; for now, this patch effectively just renames
      things.
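      
      For orientation, a rough sketch of the iterator shape this is heading
      toward; the field names are an assumption based on the description above
      (bi_bvec_done being the member a later patch adds), not a copy of the
      kernel header:
      
          /* Illustrative only; see the kernel's bio headers for the real thing. */
          #include <stdint.h>
          
          typedef uint64_t sector_sketch_t;   /* stand-in for the kernel's sector_t */
          
          struct bvec_iter_sketch {
                  sector_sketch_t bi_sector;      /* device sector the I/O is at */
                  unsigned int    bi_size;        /* residual bytes left in the I/O */
                  unsigned int    bi_idx;         /* current index into the biovec array */
                  unsigned int    bi_bvec_done;   /* bytes completed in the current
                                                   * bvec; the member the later
                                                   * immutable-biovec patch adds */
          };
      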
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Lars Ellenberg <drbd-dev@lists.linbit.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Yehuda Sadeh <yehuda@inktank.com>
      Cc: Sage Weil <sage@inktank.com>
      Cc: Alex Elder <elder@inktank.com>
      Cc: ceph-devel@vger.kernel.org
      Cc: Joshua Morris <josh.h.morris@us.ibm.com>
      Cc: Philip Kelleher <pjk1939@linux.vnet.ibm.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux390@de.ibm.com
      Cc: Boaz Harrosh <bharrosh@panasas.com>
      Cc: Benny Halevy <bhalevy@tonian.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Chris Mason <chris.mason@fusionio.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Jaegeuk Kim <jaegeuk.kim@samsung.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: Joern Engel <joern@logfs.org>
      Cc: Prasad Joshi <prasadjoshi.linux@gmail.com>
      Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Ben Myers <bpm@sgi.com>
      Cc: xfs@oss.sgi.com
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Herton Ronaldo Krzesinski <herton.krzesinski@canonical.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Guo Chao <yan@linux.vnet.ibm.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Asai Thambi S P <asamymuthupa@micron.com>
      Cc: Selvan Mani <smani@micron.com>
      Cc: Sam Bradshaw <sbradshaw@micron.com>
      Cc: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Cc: "Roger Pau Monné" <roger.pau@citrix.com>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Cc: Jerome Marchand <jmarchand@redhat.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Peng Tao <tao.peng@emc.com>
      Cc: Andy Adamson <andros@netapp.com>
      Cc: fanchaoting <fanchaoting@cn.fujitsu.com>
      Cc: Jie Liu <jeff.liu@oracle.com>
      Cc: Sunil Mushran <sunil.mushran@gmail.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: Namjae Jeon <namjae.jeon@samsung.com>
      Cc: Pankaj Kumar <pankaj.km@samsung.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
      4f024f37
  18. 21 Nov, 2013 (1 commit)
    • btrfs: Use trace condition for get_extent tracepoint · 4cd8587c
      Committed by Steven Rostedt
      Doing an if statement to test some condition to know if we should
      trigger a tracepoint is pointless when tracing is disabled. This just
      adds overhead and wastes a branch prediction. This is why the
      TRACE_EVENT_CONDITION() was created. It places the check inside the jump
      label so that the branch does not happen unless tracing is enabled.
      
      That is, instead of doing:
      
      	if (em)
      		trace_btrfs_get_extent(root, em);
      
      Which is basically this:
      
      	if (em)
      		if (static_key(trace_btrfs_get_extent)) {
      
      Using a TRACE_EVENT_CONDITION() we can just do:
      
      	trace_btrfs_get_extent(root, em);
      
      And the condition trace event will do:
      
      	if (static_key(trace_btrfs_get_extent)) {
      		if (em) {
      			...
      
      The static key is a non-conditional jump (or nop) that is faster than
      having to check whether em is NULL or not.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      4cd8587c
  19. 19 Nov, 2013 (1 commit)
  20. 13 Nov, 2013 (2 commits)
    • mm: get rid of unnecessary overhead of trace_mm_page_alloc_extfrag() · 52c8f6a5
      Committed by KOSAKI Motohiro
      In general, every tracepoint should have zero overhead when it is disabled.
      However, trace_mm_page_alloc_extfrag() is one exception.  It evaluates
      "new_type == start_migratetype" even if the tracepoint is disabled.
      
      However, the code can be moved into the tracepoint's TP_fast_assign(), and
      TP_fast_assign() exists for exactly this purpose.  This patch does that.
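      
      A hedged userspace analogy of the change (illustrative names, not the
      kernel code): pass the raw values through and do the comparison only on
      the path that runs when tracing is enabled, so a disabled tracepoint
      costs nothing.
      
          #include <stdbool.h>
          #include <stdio.h>
          
          static bool tracing_enabled;    /* stands in for the tracepoint static key */
          
          static void trace_extfrag_sketch(int new_type, int start_migratetype)
          {
                  if (!tracing_enabled)
                          return;         /* disabled: no comparison, no work */
          
                  /* this is what moving the expression into TP_fast_assign() buys:
                   * the evaluation happens only inside the enabled path */
                  bool change_ownership = (new_type == start_migratetype);
          
                  printf("mm_page_alloc_extfrag: change_ownership=%d\n",
                         change_ownership);
          }
          
          int main(void)
          {
                  trace_extfrag_sketch(1, 1);     /* tracing off: returns immediately */
                  tracing_enabled = true;
                  trace_extfrag_sketch(1, 2);     /* tracing on: comparison happens */
                  return 0;
          }
      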
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      52c8f6a5
    • writeback: do not sync data dirtied after sync start · c4a391b5
      Committed by Jan Kara
      When there are processes heavily creating small files while sync(2) is
      running, it can easily happen that quite some new files are created
      between WB_SYNC_NONE and WB_SYNC_ALL pass of sync(2).  That can happen
      especially if there are several busy filesystems (remember that sync
      traverses filesystems sequentially and waits in WB_SYNC_ALL phase on one
      fs before starting it on another fs).  Because WB_SYNC_ALL pass is slow
      (e.g.  causes a transaction commit and cache flush for each inode in
      ext3), resulting sync(2) times are rather large.
      
      The following script reproduces the problem:
      
        function run_writers
        {
          for (( i = 0; i < 10; i++ )); do
            mkdir $1/dir$i
            for (( j = 0; j < 40000; j++ )); do
              dd if=/dev/zero of=$1/dir$i/$j bs=4k count=4 &>/dev/null
            done &
          done
        }
      
        for dir in "$@"; do
          run_writers $dir
        done
      
        sleep 40
        time sync
      
      Fix the problem by disregarding inodes dirtied after sync(2) was called
      in the WB_SYNC_ALL pass.  To allow for this, sync_inodes_sb() now takes
      a time stamp when sync has started which is used for setting up work for
      flusher threads.
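      
      A hedged userspace analogy of the cutoff; the names are illustrative
      and the real logic lives in the flusher work setup:
      
          /* Inodes dirtied after sync(2) started are skipped in the
           * WB_SYNC_ALL pass. */
          #include <stdio.h>
          #include <time.h>
          
          struct inode_sketch {
                  const char *name;
                  time_t dirtied_when;
          };
          
          static void sync_all_pass(struct inode_sketch *inodes, int n,
                                    time_t sync_start)
          {
                  for (int i = 0; i < n; i++) {
                          if (inodes[i].dirtied_when > sync_start)
                                  continue;       /* dirtied after sync started: skip */
                          printf("syncing %s\n", inodes[i].name);
                  }
          }
          
          int main(void)
          {
                  time_t start = time(NULL);
                  struct inode_sketch inodes[] = {
                          { "old-file", start - 10 },
                          { "new-file", start + 10 },     /* created while sync runs */
                  };
          
                  sync_all_pass(inodes, 2, start);        /* only "old-file" is synced */
                  return 0;
          }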
      
      To give some numbers, when above script is run on two ext4 filesystems
      on simple SATA drive, the average sync time from 10 runs is 267.549
      seconds with standard deviation 104.799426.  With the patched kernel,
      the average sync time from 10 runs is 2.995 seconds with standard
      deviation 0.096.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c4a391b5
  21. 11 Nov, 2013 (2 commits)
  22. 06 Nov, 2013 (1 commit)
    • tracing: Update event filters for multibuffer · f306cc82
      Committed by Tom Zanussi
      The trace event filters are still tied to event calls rather than
      event files, which means you don't get what you'd expect when using
      filters in the multibuffer case:
      
      Before:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 2048
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      Setting the filter in tracing/instances/test1/events shouldn't affect
      the same event in tracing/events as it does above.
      
      After:
      
        # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # mkdir /sys/kernel/debug/tracing/instances/test1
        # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
        bytes_alloc > 8192
        # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
        bytes_alloc > 2048
      
      We'd like to just move the filter directly from ftrace_event_call to
      ftrace_event_file, but there are a couple cases that don't yet have
      multibuffer support and therefore have to continue using the current
      event_call-based filters.  For those cases, a new USE_CALL_FILTER bit
      is added to the event_call flags, whose main purpose is to keep the
      old behavior for those cases until they can be updated with
      multibuffer support; at that point, the USE_CALL_FILTER flag (and the
      new associated call_filter_check_discard() function) can go away.
      
      The multibuffer support also made filter_current_check_discard()
      redundant, so this change removes that function as well and replaces
      it with filter_check_discard() (or call_filter_check_discard() as
      appropriate).
      
      Link: http://lkml.kernel.org/r/f16e9ce4270c62f46b2e966119225e1c3cca7e60.1382620672.git.tom.zanussi@linux.intel.com
      Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f306cc82
  23. 31 Oct, 2013 (1 commit)