1. 11 Aug 2010: 16 commits
  2. 10 Aug 2010: 24 commits
    • flex_array: add helpers to get and put to make pointers easy to use · ea98eed9
      Committed by Eric Paris
      Getting and putting arrays of pointers with flex arrays is a PITA.  You
      have to remember to pass &ptr to the _put and you have to do weird and
      wacky casting to get the ptr back from the _get.  Add two functions
      flex_array_get_ptr() and flex_array_put_ptr() to handle all of the magic.
      
      [akpm@linux-foundation.org: simplification suggested by Joe]
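      A minimal usage sketch (added here for illustration; the calling code and
      element names are hypothetical, only the two helpers come from this patch):
      
        #include <linux/flex_array.h>
        #include <linux/gfp.h>
        
        static int store_name(struct flex_array *fa, unsigned int idx, char *name)
        {
            /* previously: flex_array_put(fa, idx, &name, GFP_KERNEL) */
            return flex_array_put_ptr(fa, idx, name, GFP_KERNEL);
        }
        
        static char *fetch_name(struct flex_array *fa, unsigned int idx)
        {
            /* previously: *(char **)flex_array_get(fa, idx) */
            return flex_array_get_ptr(fa, idx);
        }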
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea98eed9
    • iommu: inline iommu_num_pages · e269b085
      Committed by Anton Blanchard
      A profile of a network benchmark showed iommu_num_pages rather high up:
      
           0.52%  iommu_num_pages
      
      Looking at the profile, an integer divide is taking almost all of the time:
      
            %
               :      c000000000376ea4 <.iommu_num_pages>:
          1.93 :      c000000000376ea4:       fb e1 ff f8     std     r31,-8(r1)
          0.00 :      c000000000376ea8:       f8 21 ff c1     stdu    r1,-64(r1)
          0.00 :      c000000000376eac:       7c 3f 0b 78     mr      r31,r1
          3.86 :      c000000000376eb0:       38 84 ff ff     addi    r4,r4,-1
          0.00 :      c000000000376eb4:       38 05 ff ff     addi    r0,r5,-1
          0.00 :      c000000000376eb8:       7c 84 2a 14     add     r4,r4,r5
         46.95 :      c000000000376ebc:       7c 00 18 38     and     r0,r0,r3
         45.66 :      c000000000376ec0:       7c 84 02 14     add     r4,r4,r0
          0.00 :      c000000000376ec4:       7c 64 2b 92     divdu   r3,r4,r5
          0.00 :      c000000000376ec8:       38 3f 00 40     addi    r1,r31,64
          0.00 :      c000000000376ecc:       eb e1 ff f8     ld      r31,-8(r1)
          1.61 :      c000000000376ed0:       4e 80 00 20     blr
      
      Since every caller of iommu_num_pages passes in a constant power of two,
      we can inline this so that the divide is replaced by a shift.  The
      entire function is only a few instructions once optimised, so it is
      a good candidate for inlining overall.
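      For reference, a sketch of the inlined form (assuming the helper ends up
      as a static inline in include/linux/iommu-helper.h; treat the exact body
      as an approximation).  With a constant power-of-two io_page_size the
      DIV_ROUND_UP() below compiles down to a shift:
      
        #include <linux/kernel.h>       /* DIV_ROUND_UP() */
        
        static inline unsigned long iommu_num_pages(unsigned long addr,
                                                    unsigned long len,
                                                    unsigned long io_page_size)
        {
            unsigned long size = (addr & (io_page_size - 1)) + len;
        
            return DIV_ROUND_UP(size, io_page_size);
        }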
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e269b085
    • kernel.h: remove unused NIPQUAD and NIPQUAD_FMT · cf4ca487
      Committed by Joe Perches
      There are no more uses of NIPQUAD or NIPQUAD_FMT.  Remove the definitions.
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf4ca487
    • include/linux/compiler-gcc.h: use __same_type() in __must_be_array() · ea6b101d
      Committed by Rusty Russell
      We should use the __same_type() helper in __must_be_array().
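      A sketch of the two macros involved, as I understand them (__same_type()
      is the generic helper, and __must_be_array() now builds on it):
      
        /* Are two types/vars the same type (ignoring qualifiers)? */
        #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))
        
        /*
         * &a[0] degrades to a pointer: a different type from an array, so the
         * build breaks (via BUILD_BUG_ON_ZERO) if "a" is really a pointer.
         */
        #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))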
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea6b101d
    • asm-generic/io.h: add big endian versions of io{read,write}{16,32} · 7387be33
      Committed by Mike Frysinger
      asm-generic/iomap.h already provides these functions, but the
      non-generic fallback defines do not.
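      Roughly, the fallback defines look like the following (a sketch; the
      exact expressions that land in asm-generic/io.h may differ):
      
        #define ioread16be(addr)        be16_to_cpu(ioread16(addr))
        #define ioread32be(addr)        be32_to_cpu(ioread32(addr))
        
        #define iowrite16be(v, addr)    iowrite16(cpu_to_be16(v), (addr))
        #define iowrite32be(v, addr)    iowrite32(cpu_to_be32(v), (addr))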
      Signed-off-by: Mike Frysinger <vapier@gentoo.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7387be33
    • cpuidle: extend cpuidle and menu governor to handle dynamic states · 71abbbf8
      Committed by Ai Li
      On some SoC chips, HW resources may be in use during any particular idle
      period.  As a consequence, the cpuidle states that the SoC is safe to
      enter can change from idle period to idle period.  In addition, the
      latency and threshold of each cpuidle state can vary, depending on the
      operating condition when the CPU becomes idle, e.g.  the current cpu
      frequency, the current state of the HW blocks, etc.
      
      The cpuidle core and the menu governor, in their current form, are geared
      towards cpuidle states that are static, i.e.  the availability of the
      states, their latencies, and their thresholds do not change during run
      time.  cpuidle does not provide any hook that cpuidle drivers can use to
      adjust those values on the fly for the current idle period before the menu
      governor selects the target cpuidle state.
      
      This patch extends cpuidle core and the menu governor to handle states
      that are dynamic.  There are three additions in the patch and the patch
      maintains backwards-compatibility with existing cpuidle drivers.
      
      1) add prepare() to struct cpuidle_device.  A cpuidle driver can hook
         into the callback and cpuidle will call prepare() before calling the
         governor's select function.  The callback gives the cpuidle driver a
         chance to update the dynamic information of the cpuidle states for the
         current idle period, e.g.  state availability, latencies, thresholds,
         power values, etc.
      
      2) add CPUIDLE_FLAG_IGNORE as one of the state flags.  In the prepare()
         function, a cpuidle driver can set/clear the flag to indicate to the
         menu governor whether a cpuidle state should be ignored, i.e.  not
         available, during the current idle period.
      
      3) add power_specified bit to struct cpuidle_device.  The menu governor
         currently assumes that the cpuidle states are arranged in the order of
         increasing latency, threshold, and power savings.  This is true or can
         be made true for static states.  Once the state parameters are dynamic,
         the latencies, thresholds, and power savings for the cpuidle states can
         increase or decrease by different amounts from idle period to idle
         period.  So the assumption of increasing latency, threshold, and power
         savings from Cn to C(n+1) can no longer be guaranteed.
      
      It can be straightforward to calculate the power consumption of each
      available state and to specify it in power_usage for the idle period.
      Using the power_usage fields, the menu governor then selects the state
      that has the lowest power consumption and that still satisfies all other
      criteria.  The power_specified bit defaults to 0.  For existing cpuidle
      drivers, cpuidle detects that power_specified is 0 and fills in a dummy
      set of power_usage values.
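      A sketch of how a driver might use the new hook and flag (the driver,
      the helper functions and the state index here are hypothetical; only the
      prepare() callback, CPUIDLE_FLAG_IGNORE and the state fields come from
      the description above):
      
        static int my_soc_cpuidle_prepare(struct cpuidle_device *dev)
        {
            struct cpuidle_state *deep = &dev->states[2];  /* hypothetical deep state */
        
            if (my_soc_deep_state_blocked())        /* HW resource busy this period? */
                deep->flags |= CPUIDLE_FLAG_IGNORE;
            else
                deep->flags &= ~CPUIDLE_FLAG_IGNORE;
        
            /* Refresh per-period parameters that depend on operating conditions. */
            deep->exit_latency = my_soc_deep_latency_us();
            deep->power_usage  = my_soc_deep_power_mw();
        
            return 0;
        }
        
        /* at registration time: dev->prepare = my_soc_cpuidle_prepare; */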
      Signed-off-by: Ai Li <aili@codeaurora.org>
      Cc: Len Brown <len.brown@intel.com>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71abbbf8
    • hibernation: freeze swap at hibernation · d2997b10
      Committed by KAMEZAWA Hiroyuki
      When taking a memory snapshot in hibernate_snapshot(), all (directly
      called) memory allocations use GFP_ATOMIC.  Hence swap misuse during
      hibernation never occurs.
      
      But from a pessimistic point of view, there is no guarantee that no page
      allocation has __GFP_WAIT.  It is better to have a global indication "we
      have entered hibernation, don't use swap!".
      
      This patch tries to freeze new swap allocation during hibernation.  (All
      user processes are frozen, so swapin is not a concern.)
      
      This way, no updates will happen to swap_map[] between
      hibernate_snapshot() and save_image().  Swap is thawed when swsusp_free()
      is called.  We can be assured that swap corruption will not occur.
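      Conceptually (a sketch only; the names below are placeholders, not the
      helpers the patch actually adds), the allocation path just checks a
      global freeze flag during the snapshot/save window:
      
        static bool swap_frozen;    /* set across hibernate_snapshot()..save_image() */
        
        static void freeze_swap_sketch(void)  { swap_frozen = true; }
        static void thaw_swap_sketch(void)    { swap_frozen = false; } /* from swsusp_free() */
        
        static long get_swap_slot_sketch(void)
        {
            if (swap_frozen)
                return -1;      /* refuse new allocations, swap_map[] stays stable */
            /* ... normal swap slot allocation would go here ... */
            return 0;
        }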
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Ondrej Zary <linux@rainbow-software.org>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d2997b10
    • memcg: add mm_vmscan_memcg_isolate tracepoint · cc8e970c
      Committed by KOSAKI Motohiro
      Memcg also needs to trace page isolation information, as global reclaim
      does.  This patch adds that.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cc8e970c
    • vmscan: convert mm_vmscan_lru_isolate to DEFINE_EVENT · e17613c3
      Committed by KOSAKI Motohiro
      Mel Gorman recently added some vmscan tracepoints.  Unfortunately they
      cover only global reclaim, but we want to trace memcg reclaim too.
      
      Thus, this patch converts them to the DEFINE_EVENT macro, which lets the
      tracepoint definition be reused for other similar usages (i.e. memcg).
      This patch has no functional change.
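      For illustration, the generic event-class pattern such a conversion
      relies on (a made-up "sample" event pair, not the actual mm_vmscan
      definitions; this would live in a trace header under include/trace/events/):
      
        DECLARE_EVENT_CLASS(sample_isolate_template,
        
            TP_PROTO(unsigned long nr_requested, unsigned long nr_taken),
        
            TP_ARGS(nr_requested, nr_taken),
        
            TP_STRUCT__entry(
                __field(unsigned long, nr_requested)
                __field(unsigned long, nr_taken)
            ),
        
            TP_fast_assign(
                __entry->nr_requested = nr_requested;
                __entry->nr_taken     = nr_taken;
            ),
        
            TP_printk("nr_requested=%lu nr_taken=%lu",
                      __entry->nr_requested, __entry->nr_taken)
        );
        
        /* Two events share one template: global reclaim and memcg reclaim. */
        DEFINE_EVENT(sample_isolate_template, sample_lru_isolate,
            TP_PROTO(unsigned long nr_requested, unsigned long nr_taken),
            TP_ARGS(nr_requested, nr_taken)
        );
        
        DEFINE_EVENT(sample_isolate_template, sample_memcg_isolate,
            TP_PROTO(unsigned long nr_requested, unsigned long nr_taken),
            TP_ARGS(nr_requested, nr_taken)
        );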
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e17613c3
    • memcg, vmscan: add memcg reclaim tracepoint · bdce6d9e
      Committed by KOSAKI Motohiro
      Memcg also needs to trace reclaim progress, as direct reclaim does.
      This patch adds it.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bdce6d9e
    • vmscan: convert direct reclaim tracepoint to DEFINE_TRACE · cf4dcc3e
      Committed by KOSAKI Motohiro
      Mel Gorman recently added some vmscan tracepoints.  Unfortunately they
      cover only global reclaim, but we want to trace memcg reclaim too.
      
      Thus, this patch converts them to the DEFINE_TRACE macro, which lets the
      tracepoint definition be reused for other similar usages (i.e. memcg).
      This patch has no functional change.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf4dcc3e
    • rmap: add exclusive page to private anon_vma on swapin · ad8c2ee8
      Committed by Rik van Riel
      On swapin it is fairly common for a page to be owned exclusively by one
      process.  In that case we want to add the page to the anon_vma of that
      process's VMA, instead of to the root anon_vma.
      
      This will reduce the amount of rmap searching that the swapout code needs
      to do.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ad8c2ee8
    • oom: deprecate oom_adj tunable · 51b1bd2a
      Committed by David Rientjes
      /proc/pid/oom_adj is now deprecated so that it may eventually be
      removed.  The target date for removal is August 2012.
      
      A warning will be printed to the kernel log if a task attempts to use this
      interface.  Future warnings will be suppressed until the kernel is rebooted
      to prevent spamming the kernel log.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      51b1bd2a
    • oom: badness heuristic rewrite · a63d83f4
      Committed by David Rientjes
      This is a complete rewrite of the oom killer's badness() heuristic, which is
      used to determine which task to kill in oom conditions.  The goal is to
      make it as simple and predictable as possible so the results are better
      understood and we end up killing the task which will lead to the most
      memory freeing while still respecting the fine-tuning from userspace.
      
      Instead of basing the heuristic on mm->total_vm for each task, the task's
      rss and swap space is used instead.  This is a better indication of the
      amount of memory that will be freeable if the oom killed task is chosen
      and subsequently exits.  This helps specifically in cases where KDE or
      GNOME is chosen for oom kill on desktop systems instead of a memory
      hogging task.
      
      The baseline for the heuristic is a proportion of memory that each task is
      currently using in memory plus swap compared to the amount of "allowable"
      memory.  "Allowable," in this sense, means the system-wide resources for
      unconstrained oom conditions, the set of mempolicy nodes, the mems
      attached to current's cpuset, or a memory controller's limit.  The
      proportion is given on a scale of 0 (never kill) to 1000 (always kill),
      roughly meaning that a task with a badness() score of 500 consumes
      approximately 50% of allowable memory resident in RAM or in swap
      space.
      
      The proportion is always relative to the amount of "allowable" memory and
      not the total amount of RAM systemwide so that mempolicies and cpusets may
      operate in isolation; they shall not need to know the true size of the
      machine on which they are running if they are bound to a specific set of
      nodes or mems, respectively.
      
      Root tasks are given 3% extra memory just like __vm_enough_memory()
      provides in LSMs.  In the event of two tasks consuming similar amounts of
      memory, it is generally better to save root's task.
      
      Because of the change in the badness() heuristic's baseline, it is also
      necessary to introduce a new user interface to tune it.  It's not possible
      to redefine the meaning of /proc/pid/oom_adj with a new scale since the
      ABI cannot be changed for backward compatibility.  Instead, a new tunable,
      /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000.  It may
      be used to polarize the heuristic such that certain tasks are never
      considered for oom kill while others may always be considered.  The value
      is added directly into the badness() score so a value of -500, for
      example, means to discount 50% of its memory consumption in comparison to
      other tasks either on the system, bound to the mempolicy, in the cpuset,
      or sharing the same memory controller.
      
      /proc/pid/oom_adj is changed so that its meaning is rescaled into the
      units used by /proc/pid/oom_score_adj, and vice versa.  Changing one of
      these per-task tunables will rescale the value of the other to an
      equivalent meaning.  Although /proc/pid/oom_adj was originally defined as
      a bitshift on the badness score, it now shares the same linear growth as
      /proc/pid/oom_score_adj but with different granularity.  This is required
      so the ABI is not broken with userspace applications and allows oom_adj to
      be deprecated for future removal.
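      A simplified sketch of the scoring described above (not the exact kernel
      code; the parameters stand in for the task's pages and the "allowable"
      total):
      
        #include <linux/types.h>
        
        /* Score on a 0..1000 scale: share of allowable memory, ~3% root bonus,
         * then oom_score_adj applied directly on the same scale. */
        static unsigned int badness_sketch(unsigned long rss_pages,
                                           unsigned long swap_pages,
                                           unsigned long allowable_pages,
                                           bool is_root, int oom_score_adj)
        {
            long points;
        
            points = (rss_pages + swap_pages) * 1000 / allowable_pages;
        
            if (is_root)
                points -= 30;           /* ~3% of the 0..1000 range */
        
            points += oom_score_adj;    /* -1000 .. +1000 */
        
            if (points < 0)
                return 0;
            return points > 1000 ? 1000 : points;
        }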
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a63d83f4
    • oom: move badness() declaration into oom.h · 74bcbf40
      Committed by Andrew Morton
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      74bcbf40
    • vmscan: kill prev_priority completely · 25edde03
      Committed by KOSAKI Motohiro
      Since 2.6.28, zone->prev_priority has been unused, so it can be removed
      safely.  Removing it also reduces stack usage slightly.
      
      Now I have to say that I'm sorry.  Two years ago I thought prev_priority
      could be integrated again and would be useful, but four (or more)
      attempts have not produced good performance numbers, so I am giving up
      on that approach.
      
      The rest of this changelog is notes on prev_priority: why it existed in
      the first place and why it may not be necessary any more.  This
      information is based heavily on discussions between Andrew Morton, Rik
      van Riel and KOSAKI Motohiro, who is quoted extensively below.
      
      Historically, prev_priority was important because it determined when the
      VM would start unmapping PTE pages; at the time there were no other
      balances of note within the VM, such as Anon vs File and Mapped vs
      Unmapped.  Without prev_priority, there is a potential risk of
      unnecessarily increasing minor faults, as a large amount of read
      activity on use-once pages could push mapped pages to the end of the
      LRU, where they get unmapped.
      
      There is no proof this is still a problem, and currently it is not
      considered to be one.  Active file pages are not deactivated if the
      active file list is smaller than the inactive list, reducing the
      likelihood that file-mapped pages are pushed off the LRU, and referenced
      executable pages are kept on the active list to avoid them being pushed
      out by read activity.
      
      Even if it were still a problem, prev_priority would not work nowadays.
      First of all, current vmscan still contains a lot of UP-centric code,
      which exposes weaknesses on machines with dozens of CPUs; more and more
      improvement is needed there.
      
      The problem is that current vmscan mixes up per-system pressure,
      per-zone pressure and per-task pressure.  For example, prev_priority
      tries to boost the priority of other concurrent reclaimers, but if
      another task has a mempolicy restriction this is unnecessary, and it
      also causes large latencies and excessive reclaim.  Per-task priority
      plus the prev_priority adjustment emulates per-system pressure, but that
      has two issues: 1) the emulation is too rough and brutal, and 2) we need
      per-zone pressure, not per-system pressure.
      
      Another example: DEF_PRIORITY is currently 12, which means the LRU
      rotates about two full cycles (1/4096 + 1/2048 + 1/1024 + .. + 1) before
      the OOM killer is invoked.  But if 10,000 threads enter DEF_PRIORITY
      reclaim at the same time, the system is under higher memory pressure
      than priority==0 suggests (1/4096 * 10,000 > 2).  prev_priority cannot
      solve such multithreaded workload issues; in other words, the
      prev_priority concept assumes the system does not have many threads.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25edde03
    • vmscan: tracing: add trace event when a page is written · 755f0225
      Committed by Mel Gorman
      Add a trace event for when page reclaim queues a page for IO and records
      whether it is synchronous or asynchronous.  Excessive synchronous IO for a
      process can result in noticeable stalls during direct reclaim.  Excessive
      IO from page reclaim may indicate that the system is seriously
      under-provisioned for the amount of dirty pages that exist.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      755f0225
    • vmscan: tracing: add trace events for LRU page isolation · a8a94d15
      Committed by Mel Gorman
      Add an event for when pages are isolated en-masse from the LRU lists.
      This event augments the information available on LRU traffic and can be
      used to evaluate lumpy reclaim.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8a94d15
    • vmscan: tracing: add trace events for kswapd wakeup, sleeping and direct reclaim · 33906bc5
      Committed by Mel Gorman
      Add two trace events for kswapd waking up and going to sleep, for the
      purposes of tracking kswapd activity and two trace events for direct
      reclaim beginning and ending.  The information can be used to work out how
      much time a process or the system is spending on the reclamation of pages
      and in the case of direct reclaim, how many pages were reclaimed for that
      process.  High frequency triggering of these events could point to memory
      pressure problems.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33906bc5
    • mm: implement writeback livelock avoidance using page tagging · f446daae
      Committed by Jan Kara
      We try to avoid livelocks of writeback when someone steadily creates dirty
      pages in a mapping we are writing out.  For memory-cleaning writeback,
      using nr_to_write works reasonably well but we cannot really use it for
      data integrity writeback.  This patch tries to solve the problem.
      
      The idea is simple: Tag all pages that should be written back with a
      special tag (TOWRITE) in the radix tree.  This can be done rather quickly
      and thus livelocks should not happen in practice.  Then we start doing the
      hard work of locking pages and sending them to disk only for those pages
      that have TOWRITE tag set.
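      A simplified sketch of the resulting two-pass structure (based on the
      description above; the real logic lives in write_cache_pages(), and the
      helper/tag names are as added by this series to the best of my
      understanding):
      
        #include <linux/pagemap.h>
        #include <linux/pagevec.h>
        
        static void writeback_range_sketch(struct address_space *mapping,
                                           pgoff_t start, pgoff_t end)
        {
            struct pagevec pvec;
            pgoff_t index = start;
            int i;
        
            /* Pass 1: quickly tag everything currently dirty as TOWRITE. */
            tag_pages_for_writeback(mapping, start, end);
        
            /* Pass 2: do the slow work only for pages tagged in pass 1. */
            pagevec_init(&pvec, 0);
            while (pagevec_lookup_tag(&pvec, mapping, &index,
                                      PAGECACHE_TAG_TOWRITE, PAGEVEC_SIZE)) {
                for (i = 0; i < pagevec_count(&pvec); i++) {
                    struct page *page = pvec.pages[i];
        
                    if (page->index > end)
                        break;
                    lock_page(page);
                    /* ... clear dirty, set writeback, submit IO ... */
                    unlock_page(page);
                }
                pagevec_release(&pvec);
                cond_resched();
            }
        }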
      
      Note: Adding the new radix tree tag grows the radix tree node from 288 to
      296 bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs.
      However, the number of slab/slub items per page remains the same (13 and
      7, respectively).
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f446daae
    • radix-tree: implement function radix_tree_range_tag_if_tagged · ebf8aa44
      Committed by Jan Kara
      Implement a function that, for each item in a given range, sets one tag
      if another tag is already set.
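      A usage sketch under the assumption that the prototype looks roughly like
      this (it walks [*first_indexp, last_index] and, for every slot carrying
      iftag, also sets settag, tagging at most nr_to_tag slots):
      
        unsigned long radix_tree_range_tag_if_tagged(struct radix_tree_root *root,
                                                     unsigned long *first_indexp,
                                                     unsigned long last_index,
                                                     unsigned long nr_to_tag,
                                                     unsigned int iftag,
                                                     unsigned int settag);
        
        /* Example (caller holds the tree lock): copy DIRTY slots to TOWRITE. */
        static void tag_dirty_as_towrite(struct radix_tree_root *root,
                                         unsigned long start, unsigned long end)
        {
            unsigned long index = start;
        
            radix_tree_range_tag_if_tagged(root, &index, end, ULONG_MAX,
                                           PAGECACHE_TAG_DIRTY,
                                           PAGECACHE_TAG_TOWRITE);
        }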
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebf8aa44
    • ksm: fix ksm swapin time optimization · ba6f0ff3
      Committed by Andrea Arcangeli
      The new anon-vma code was suboptimal and led to erratic invocation of
      ksm_does_need_to_copy.  That leads to host hangs or guest vnc lockups,
      or other weird behavior.  It's unclear why ksm_does_need_to_copy is
      unstable, but the point is that when KSM is not in use,
      ksm_does_need_to_copy must never run or we bounce pages for no good
      reason.  I suspect the same hangs will happen with KVM swaps.  But this
      at least fixes the regression in the new-anon-vma code and only lets
      KSM bugs trigger when KSM is in use.
      
      The code in do_swap_page likely doesn't cope well with a page that is
      not in the swapcache, especially the memcg code.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Izik Eidus <ieidus@yahoo.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba6f0ff3
    • tmpfs: make tmpfs scalable with percpu_counter for used blocks · 7e496299
      Committed by Tim Chen
      The current implementation of tmpfs is not scalable.  We found that
      stat_lock is contended by multiple threads when we need to get a new page,
      leading to useless spinning inside this spin lock.
      
      This patch makes use of the percpu_counter library to maintain local count
      of used blocks to speed up getting and returning of pages.  So the
      acquisition of stat_lock is unnecessary for getting and returning blocks,
      improving the performance of tmpfs on systems with a large number of cpus.
      On a 4 socket, 32 core NHM-EX system, we saw an improvement of 270%.
      
      The implementation below has a slight chance of race between threads
      causing a slight overshoot of the maximum configured blocks.  However, any
      overshoot is small, and is bounded by the number of cpus.  This happens
      when the number of used blocks is slightly below the maximum configured
      blocks when a thread checks the used block count, and another thread
      allocates the last block before the current thread does.  This should not
      be a problem for tmpfs, as the overshoot is most likely to be a few blocks
      and bounded.  If a strict limit is really desired, then configure the max
      blocks to be the limit less the number of cpus in the system.
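      An illustrative sketch of the accounting pattern (the struct and function
      names here are made up; only the percpu_counter API calls are the point):
      
        #include <linux/errno.h>
        #include <linux/percpu_counter.h>
        
        struct sbinfo_sketch {
            unsigned long max_blocks;
            struct percpu_counter used_blocks;  /* replaces the stat_lock'd count */
        };
        
        static int alloc_block_sketch(struct sbinfo_sketch *sbinfo)
        {
            /* Cheap, approximate check: may overshoot by a few blocks. */
            if (percpu_counter_compare(&sbinfo->used_blocks,
                                       sbinfo->max_blocks) >= 0)
                return -ENOSPC;
        
            percpu_counter_add(&sbinfo->used_blocks, 1);    /* no stat_lock */
            return 0;
        }
        
        static void free_block_sketch(struct sbinfo_sketch *sbinfo)
        {
            percpu_counter_add(&sbinfo->used_blocks, -1);
        }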
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7e496299
    • tmpfs: add accurate compare function to percpu_counter library · 27f5e0f6
      Committed by Tim Chen
      Add percpu_counter_compare that allows for a quick but accurate comparison
      of percpu_counter with a given value.
      
      A rough count is provided by the count field in percpu_counter structure,
      without accounting for the other values stored in individual cpu counters.
      
      The actual count is the sum of count and the cpu counters.  However, the
      count field never differs from the actual value by more than
      batch*num_online_cpus().  So we do not need to compute the actual count
      for the comparison if count differs from the given value by more than
      this margin, which allows for a quick comparison without summing up all
      the per-cpu counters.
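      A sketch of the comparison logic this describes (close to, though not
      guaranteed identical with, the lib/percpu_counter.c implementation):
      
        int percpu_counter_compare_sketch(struct percpu_counter *fbc, s64 rhs)
        {
            s64 count = percpu_counter_read(fbc);
        
            /* The rough count cannot be off by more than this margin. */
            if (abs(count - rhs) > percpu_counter_batch * num_online_cpus())
                return count > rhs ? 1 : -1;
        
            /* Too close to call: take the exact (slow) sum over all cpus. */
            count = percpu_counter_sum(fbc);
            if (count > rhs)
                return 1;
            if (count < rhs)
                return -1;
            return 0;
        }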
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      27f5e0f6