1. 11 Aug 2010 (12 commits)
  2. 10 Aug 2010 (28 commits)
    • flex_array: add helpers to get and put to make pointers easy to use · ea98eed9
      Committed by Eric Paris
      Getting and putting arrays of pointers with flex arrays is a PITA.  You
      have to remember to pass &ptr to the _put and you have to do weird and
      wacky casting to get the ptr back from the _get.  Add two functions
      flex_array_get_ptr() and flex_array_put_ptr() to handle all of the magic.
      
      [akpm@linux-foundation.org: simplification suggested by Joe]
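      A minimal usage sketch of the new helpers; the element type and array size
      here are made up for illustration:
      
          #include <linux/flex_array.h>
          #include <linux/slab.h>
      
          struct my_obj { int id; };    /* hypothetical element type */
      
          static int example_store_ptrs(void)
          {
              struct flex_array *fa;
              struct my_obj *obj, *found;
              int err;
      
              /* One pointer per slot; 128 slots is an arbitrary example size. */
              fa = flex_array_alloc(sizeof(void *), 128, GFP_KERNEL);
              if (!fa)
                  return -ENOMEM;
      
              obj = kzalloc(sizeof(*obj), GFP_KERNEL);
              if (!obj) {
                  flex_array_free(fa);
                  return -ENOMEM;
              }
      
              /* Previously: flex_array_put(fa, 0, &obj, GFP_KERNEL) plus a cast
               * and dereference on the flex_array_get() side. */
              err = flex_array_put_ptr(fa, 0, obj, GFP_KERNEL);
              if (err)
                  goto out;
      
              found = flex_array_get_ptr(fa, 0);    /* returns obj directly */
              WARN_ON(found != obj);
          out:
              kfree(obj);
              flex_array_free(fa);
              return err;
          }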
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: James Morris <jmorris@namei.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • iommu: inline iommu_num_pages · e269b085
      Committed by Anton Blanchard
      A profile of a network benchmark showed iommu_num_pages rather high up:
      
           0.52%  iommu_num_pages
      
      Looking at the profile, an integer divide is taking almost all of the time:
      
            %
               :      c000000000376ea4 <.iommu_num_pages>:
          1.93 :      c000000000376ea4:       fb e1 ff f8     std     r31,-8(r1)
          0.00 :      c000000000376ea8:       f8 21 ff c1     stdu    r1,-64(r1)
          0.00 :      c000000000376eac:       7c 3f 0b 78     mr      r31,r1
          3.86 :      c000000000376eb0:       38 84 ff ff     addi    r4,r4,-1
          0.00 :      c000000000376eb4:       38 05 ff ff     addi    r0,r5,-1
          0.00 :      c000000000376eb8:       7c 84 2a 14     add     r4,r4,r5
         46.95 :      c000000000376ebc:       7c 00 18 38     and     r0,r0,r3
         45.66 :      c000000000376ec0:       7c 84 02 14     add     r4,r4,r0
          0.00 :      c000000000376ec4:       7c 64 2b 92     divdu   r3,r4,r5
          0.00 :      c000000000376ec8:       38 3f 00 40     addi    r1,r31,64
          0.00 :      c000000000376ecc:       eb e1 ff f8     ld      r31,-8(r1)
          1.61 :      c000000000376ed0:       4e 80 00 20     blr
      
      Since every caller of iommu_num_pages passes in a constant power of two
      we can inline this such that the divide is replaced by a shift. The
      entire function is only a few instructions once optimised, so it is
      a good candidate for inlining overall.
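      A sketch of the inlined helper: with a constant power-of-two io_page_size,
      the compiler turns DIV_ROUND_UP's divide into a shift (exact placement and
      naming in the final patch may differ slightly):
      
          #include <linux/kernel.h>    /* DIV_ROUND_UP */
      
          static inline unsigned long iommu_num_pages(unsigned long addr,
                                                      unsigned long len,
                                                      unsigned long io_page_size)
          {
              /* Bytes covered, from the first byte's offset within its page. */
              unsigned long size = (addr & (io_page_size - 1)) + len;
      
              return DIV_ROUND_UP(size, io_page_size);
          }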
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel.h: remove unused NIPQUAD and NIPQUAD_FMT · cf4ca487
      Committed by Joe Perches
      There are no more uses of NIPQUAD or NIPQUAD_FMT.  Remove the definitions.
      Signed-off-by: Joe Perches <joe@perches.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/compiler-gcc.h: use __same_type() in __must_be_array() · ea6b101d
      Committed by Rusty Russell
      We should use the __same_type() helper in __must_be_array().
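      Roughly, the macro then becomes (sketch):
      
          /* &a[0] degrades to a pointer, i.e. a different type from an array,
           * so __same_type() being true means "a" was not an array at all. */
          #define __must_be_array(a) \
                  BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))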
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuidle: extend cpuidle and menu governor to handle dynamic states · 71abbbf8
      Committed by Ai Li
      On some SoC chips, HW resources may be in use during any particular idle
      period.  As a consequence, the cpuidle states that the SoC is safe to
      enter can change from idle period to idle period.  In addition, the
      latency and threshold of each cpuidle state can vary, depending on the
      operating condition when the CPU becomes idle, e.g.  the current cpu
      frequency, the current state of the HW blocks, etc.
      
      cpuidle core and the menu governor, in their current form, are geared
      towards cpuidle states that are static, i.e. the availability of the
      states and their latencies and thresholds do not change during run
      time.  cpuidle does not provide any hook that cpuidle drivers can use to
      adjust those values on the fly for the current idle period before the menu
      governor selects the target cpuidle state.
      
      This patch extends cpuidle core and the menu governor to handle states
      that are dynamic.  There are three additions in the patch and the patch
      maintains backwards-compatibility with existing cpuidle drivers.
      
      1) add prepare() to struct cpuidle_device.  A cpuidle driver can hook
         into the callback and cpuidle will call prepare() before calling the
         governor's select function.  The callback gives the cpuidle driver a
         chance to update the dynamic information of the cpuidle states for the
         current idle period, e.g.  state availability, latencies, thresholds,
         power values, etc.
      
      2) add CPUIDLE_FLAG_IGNORE as one of the state flags.  In the prepare()
         function, a cpuidle driver can set/clear the flag to indicate to the
         menu governor whether a cpuidle state should be ignored, i.e.  not
         available, during the current idle period.
      
      3) add power_specified bit to struct cpuidle_device.  The menu governor
         currently assumes that the cpuidle states are arranged in the order of
         increasing latency, threshold, and power savings.  This is true or can
         be made true for static states.  Once the state parameters are dynamic,
         the latencies, thresholds, and power savings for the cpuidle states can
         increase or decrease by different amounts from idle period to idle
         period.  So the assumption of increasing latency, threshold, and power
         savings from Cn to C(n+1) can no longer be guaranteed.
      
      It can be straightforward to calculate the power consumption of each
      available state and to specify it in power_usage for the idle period.
      Using the power_usage fields, the menu governor then selects the state
      that has the lowest power consumption and that still satisfies all other
      criteria.  The power_specified bit defaults to 0.  For existing cpuidle
      drivers, cpuidle detects that power_specified is 0 and fills in a dummy
      set of power_usage values.
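      A hedged sketch of how a SoC driver might use the new hooks; the my_soc_*()
      helpers are hypothetical placeholders for whatever the platform uses to
      query its hardware state:
      
          static int my_soc_cpuidle_prepare(struct cpuidle_device *dev)
          {
              int i;
      
              for (i = 0; i < dev->state_count; i++) {
                  struct cpuidle_state *st = &dev->states[i];
      
                  /* Hide states that are unsafe for this idle period. */
                  if (my_soc_state_blocked(i))
                      st->flags |= CPUIDLE_FLAG_IGNORE;
                  else
                      st->flags &= ~CPUIDLE_FLAG_IGNORE;
      
                  /* Refresh per-period latency and power numbers. */
                  st->exit_latency = my_soc_state_latency_us(i);
                  st->power_usage  = my_soc_state_power_mw(i);
              }
              return 0;
          }
      
          /* At registration time (sketch):
           *     dev->prepare = my_soc_cpuidle_prepare;
           *     dev->power_specified = 1;   // menu governor picks lowest power_usage
           */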
      Signed-off-by: Ai Li <aili@codeaurora.org>
      Cc: Len Brown <len.brown@intel.com>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hibernation: freeze swap at hibernation · d2997b10
      Committed by KAMEZAWA Hiroyuki
      When taking a memory snapshot in hibernate_snapshot(), all (directly
      called) memory allocations use GFP_ATOMIC.  Hence swap misuse during
      hibernation never occurs.
      
      But from a pessimistic point of view, there is no guarantee that no page
      allocation has __GFP_WAIT.  It is better to have a global indication "we
      are entering hibernation, don't use swap!".
      
      This patch freezes new swap allocation during hibernation.  (All user
      processes are frozen, so swapin is not a concern.)
      
      This way, no updates will happen to swap_map[] between
      hibernate_snapshot() and save_image().  Swap is thawed when swsusp_free()
      is called.  We can be assured that swap corruption will not occur.
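      The idea, as a rough sketch only; the function names here are illustrative
      and not necessarily those used by the patch:
      
          /* A global "don't touch swap" flag checked by the swap allocator. */
          static bool hibernation_swap_frozen;
      
          void freeze_swap_for_hibernation(void)    /* before hibernate_snapshot() */
          {
              hibernation_swap_frozen = true;
          }
      
          void thaw_swap_after_hibernation(void)    /* from swsusp_free() */
          {
              hibernation_swap_frozen = false;
          }
      
          /* ...and in the swap allocator, new allocations are refused while frozen: */
          bool swap_allocation_allowed(void)
          {
              return !hibernation_swap_frozen;
          }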
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Ondrej Zary <linux@rainbow-software.org>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: add exclusive page to private anon_vma on swapin · ad8c2ee8
      Committed by Rik van Riel
      On swapin it is fairly common for a page to be owned exclusively by one
      process.  In that case we want to add the page to the anon_vma of that
      process's VMA, instead of to the root anon_vma.
      
      This will reduce the amount of rmap searching that the swapout code needs
      to do.
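      Conceptually, the rmap-add path gains an "exclusive" hint (a sketch; the
      exact helper names and call sites in the patch may differ):
      
          static void __page_set_anon_rmap(struct page *page,
                                           struct vm_area_struct *vma,
                                           unsigned long address, int exclusive)
          {
              struct anon_vma *anon_vma = vma->anon_vma;
      
              /*
               * A page that may be mapped by more than one process must stay
               * attached to the root (oldest) anon_vma so every mapper can find
               * it; a page we own exclusively can use this VMA's own anon_vma,
               * shrinking the rmap search space on swapout.
               */
              if (!exclusive)
                  anon_vma = anon_vma->root;
      
              page->mapping = (struct address_space *)
                              ((unsigned long)anon_vma + PAGE_MAPPING_ANON);
              page->index = linear_page_index(vma, address);
          }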
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: deprecate oom_adj tunable · 51b1bd2a
      Committed by David Rientjes
      /proc/pid/oom_adj is now deprecated so that it may eventually be
      removed.  The target date for removal is August 2012.
      
      A warning will be printed to the kernel log if a task attempts to use this
      interface.  Future warnings will be suppressed until the kernel is rebooted
      to prevent spamming the kernel log.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: badness heuristic rewrite · a63d83f4
      Committed by David Rientjes
      This is a complete rewrite of the oom killer's badness() heuristic, which is
      used to determine which task to kill in oom conditions.  The goal is to
      make it as simple and predictable as possible so the results are better
      understood and we end up killing the task which will lead to the most
      memory freeing while still respecting the fine-tuning from userspace.
      
      Instead of basing the heuristic on mm->total_vm for each task, it now uses
      the task's rss and swap space.  This is a better indication of the
      amount of memory that will be freeable if the oom killed task is chosen
      and subsequently exits.  This helps specifically in cases where KDE or
      GNOME is chosen for oom kill on desktop systems instead of a memory
      hogging task.
      
      The baseline for the heuristic is a proportion of memory that each task is
      currently using in memory plus swap compared to the amount of "allowable"
      memory.  "Allowable," in this sense, means the system-wide resources for
      unconstrained oom conditions, the set of mempolicy nodes, the mems
      attached to current's cpuset, or a memory controller's limit.  The
      proportion is given on a scale of 0 (never kill) to 1000 (always kill),
      roughly meaning that a task with a badness() score of 500 consumes
      approximately 50% of allowable memory, resident in RAM or in swap
      space.
      
      The proportion is always relative to the amount of "allowable" memory and
      not the total amount of RAM systemwide so that mempolicies and cpusets may
      operate in isolation; they shall not need to know the true size of the
      machine on which they are running if they are bound to a specific set of
      nodes or mems, respectively.
      
      Root tasks are given 3% extra memory just like __vm_enough_memory()
      provides in LSMs.  In the event of two tasks consuming similar amounts of
      memory, it is generally better to save root's task.
      
      Because of the change in the badness() heuristic's baseline, it is also
      necessary to introduce a new user interface to tune it.  It's not possible
      to redefine the meaning of /proc/pid/oom_adj with a new scale since the
      ABI cannot be changed for backward compatibility.  Instead, a new tunable,
      /proc/pid/oom_score_adj, is added that ranges from -1000 to +1000.  It may
      be used to polarize the heuristic such that certain tasks are never
      considered for oom kill while others may always be considered.  The value
      is added directly into the badness() score so a value of -500, for
      example, means to discount 50% of its memory consumption in comparison to
      other tasks either on the system, bound to the mempolicy, in the cpuset,
      or sharing the same memory controller.
      
      /proc/pid/oom_adj is changed so that its meaning is rescaled into the
      units used by /proc/pid/oom_score_adj, and vice versa.  Changing one of
      these per-task tunables will rescale the value of the other to an
      equivalent meaning.  Although /proc/pid/oom_adj was originally defined as
      a bitshift on the badness score, it now shares the same linear growth as
      /proc/pid/oom_score_adj but with different granularity.  This is required
      so the ABI is not broken with userspace applications and allows oom_adj to
      be deprecated for future removal.
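      A rough sketch of the arithmetic described above; the helper and the
      constant 17 (assumed here to be the magnitude of OOM_DISABLE) paraphrase
      the description and are not the literal kernel code:
      
          /* Sketch only: proportional badness on the new 0..1000 scale. */
          static long badness_sketch(unsigned long rss_pages, unsigned long swap_pages,
                                     unsigned long allowable_pages,
                                     int oom_score_adj, int is_root)
          {
              long points;
      
              points = (long)((rss_pages + swap_pages) * 1000 / allowable_pages);
      
              if (is_root)
                  points -= 30;          /* roughly a 3% (of 1000) discount */
      
              points += oom_score_adj;   /* userspace bias, -1000 .. +1000 */
      
              return points > 0 ? points : 0;
          }
      
          /* Legacy oom_adj writes are rescaled onto the new range, roughly:
           *     oom_score_adj = oom_adj * 1000 / 17;   // 17 == |OOM_DISABLE|, assumed
           * so oom_adj == -17 still maps to "never kill" (-1000).
           */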
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: move badness() declaration into oom.h · 74bcbf40
      Committed by Andrew Morton
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: kill prev_priority completely · 25edde03
      Committed by KOSAKI Motohiro
      Since 2.6.28, zone->prev_priority has been unused, so it can be removed
      safely.  This reduces stack usage slightly.
      
      Now I have to say that I'm sorry.  Two years ago I thought prev_priority
      could be integrated again and would be useful, but four (or more) attempts
      have not produced good performance numbers, so I am giving up on that approach.
      
      The rest of this changelog consists of notes on prev_priority: why it existed
      in the first place and why it may no longer be necessary.  This information
      is based heavily on discussions between Andrew Morton, Rik van Riel and
      KOSAKI Motohiro, who is quoted heavily below.
      
      Historically prev_priority was important because it determined when the VM
      would start unmapping PTE pages; otherwise there are no balances of note
      within the VM between Anon vs File and Mapped vs Unmapped.  Without
      prev_priority, there is a potential risk of unnecessarily increased minor
      faults, as a large amount of read activity on use-once pages could push
      mapped pages to the end of the LRU, where they get unmapped.
      
      There is no proof this is still a problem, but currently it is not considered
      to be.  Active files are not deactivated if the active file list is smaller
      than the inactive list, reducing the likelihood that file-mapped pages are
      pushed off the LRU, and referenced executable pages are kept on the
      active list to avoid them getting pushed out by read activity.
      
      Even if it were a problem, prev_priority would not work nowadays.  First of
      all, current vmscan still contains a lot of UP-centric code, which exposes
      weaknesses on machines with dozens of CPUs; more improvement is needed there.
      
      The problem is that current vmscan mixes up per-system pressure, per-zone
      pressure and per-task pressure.  For example, prev_priority tries to boost
      the priority of other concurrent reclaimers, but if another task has a
      mempolicy restriction this is unnecessary and also causes excessive latency
      and over-reclaim.  Per-task priority plus the prev_priority adjustment
      emulates per-system pressure, but that has two issues: 1) the emulation is
      too rough and brutal, and 2) we need per-zone pressure, not per-system.
      
      Another example: DEF_PRIORITY is currently 12, which means the LRU rotates
      about 2 cycles (1/4096 + 1/2048 + 1/1024 + ... + 1) before invoking the
      OOM killer.  But if 10,000 threads enter DEF_PRIORITY reclaim at the same
      time, the system is under higher memory pressure than priority==0
      (1/4096 * 10,000 > 2).  prev_priority cannot solve such a multithreaded
      workload issue; in other words, the prev_priority concept assumes the
      system does not have lots of threads.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Rubin <mrubin@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: implement writeback livelock avoidance using page tagging · f446daae
      Committed by Jan Kara
      We try to avoid livelocks of writeback when someone steadily creates dirty
      pages in a mapping we are writing out.  For memory-cleaning writeback,
      using nr_to_write works reasonably well, but we cannot really use it for
      data integrity writeback.  This patch tries to solve the problem.
      
      The idea is simple: Tag all pages that should be written back with a
      special tag (TOWRITE) in the radix tree.  This can be done rather quickly
      and thus livelocks should not happen in practice.  Then we start doing the
      hard work of locking pages and sending them to disk only for those pages
      that have TOWRITE tag set.
      
      Note: Adding the new radix tree tag grows the radix tree node from 288 to 296
      bytes for 32-bit archs and from 552 to 560 bytes for 64-bit archs.
      However, the number of slab/slub items per page remains the same (13 and 7
      respectively).
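      In outline, the data-integrity path then becomes a two-pass scheme (a sketch
      following the description above, not the literal patch):
      
          /* Sketch of the scheme used by write_cache_pages() for WB_SYNC_ALL. */
          static void writeback_range_no_livelock(struct address_space *mapping,
                                                  struct writeback_control *wbc,
                                                  pgoff_t index, pgoff_t end)
          {
              struct pagevec pvec;
              int tag = PAGECACHE_TAG_DIRTY;
      
              if (wbc->sync_mode == WB_SYNC_ALL) {
                  /* Pass 1: cheaply tag everything that is dirty right now. */
                  tag_pages_for_writeback(mapping, index, end);
                  tag = PAGECACHE_TAG_TOWRITE;
              }
      
              pagevec_init(&pvec, 0);
              /* Pass 2: lock and write only the pages tagged in pass 1, so pages
               * dirtied after the tagging pass cannot livelock us.  (The real
               * code also stops at 'end' and honours nr_to_write.) */
              while (pagevec_lookup_tag(&pvec, mapping, &index, tag, PAGEVEC_SIZE)) {
                  /* ->writepage() each page here, as write_cache_pages() does */
                  pagevec_release(&pvec);
              }
          }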
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • radix-tree: implement function radix_tree_range_tag_if_tagged · ebf8aa44
      Committed by Jan Kara
      Implement a function for setting one tag, if another tag is set, on each
      item in a given range.
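      A usage sketch; the caller, locking and the batch size of 1024 are
      illustrative assumptions (the writeback patch above uses this to copy the
      DIRTY tag to TOWRITE):
      
          unsigned long start = 0, end = (unsigned long)-1, tagged;
      
          do {
              spin_lock_irq(&mapping->tree_lock);
              /* Tag at most 1024 DIRTY items in [start, end] with TOWRITE;
               * 'start' is advanced so the next batch resumes where we left off. */
              tagged = radix_tree_range_tag_if_tagged(&mapping->page_tree,
                                                      &start, end, 1024,
                                                      PAGECACHE_TAG_DIRTY,
                                                      PAGECACHE_TAG_TOWRITE);
              spin_unlock_irq(&mapping->tree_lock);
              cond_resched();    /* batching keeps tree_lock hold times short */
          } while (tagged >= 1024);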
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ksm: fix ksm swapin time optimization · ba6f0ff3
      Committed by Andrea Arcangeli
      The new anon-vma code was suboptimal and led to erratic invocation of
      ksm_does_need_to_copy.  That leads to host hangs or guest vnc lockups, or
      other weird behavior.  It's unclear why ksm_does_need_to_copy is unstable,
      but the point is that when KSM is not in use, ksm_does_need_to_copy must
      never run or we bounce pages for no good reason.  I suspect the same hangs
      will happen with KVM swaps.  But this at least fixes the regression in the
      new-anon-vma code and only lets KSM bugs trigger when KSM is in use.
      
      The code in do_swap_page likely doesn't cope well with a page that is not
      in the swapcache, especially the memcg code.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Izik Eidus <ieidus@yahoo.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: make tmpfs scalable with percpu_counter for used blocks · 7e496299
      Committed by Tim Chen
      The current implementation of tmpfs is not scalable.  We found that
      stat_lock is contended by multiple threads when we need to get a new page,
      leading to useless spinning inside this spin lock.
      
      This patch makes use of the percpu_counter library to maintain a local count
      of used blocks to speed up getting and returning of pages.  The acquisition
      of stat_lock thus becomes unnecessary for getting and returning blocks,
      improving the performance of tmpfs on systems with a large number of cpus.
      On a 4-socket, 32-core NHM-EX system, we saw an improvement of 270%.
      
      The implementation below has a slight chance of a race between threads
      causing a slight overshoot of the maximum configured blocks.  However, any
      overshoot is small and is bounded by the number of cpus.  This happens
      when the number of used blocks is slightly below the maximum configured
      blocks when a thread checks the used block count, and another thread
      allocates the last block before the current thread does.  This should not
      be a problem for tmpfs, as the overshoot is most likely to be a few blocks
      and bounded.  If a strict limit is really desired, then configure the max
      blocks to be the limit less the number of cpus in the system.
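      A sketch of the pattern; the struct shown here is a simplified stand-in for
      shmem's superblock info, whose real field names may differ slightly:
      
          #include <linux/percpu_counter.h>
      
          struct shmem_sb_info_sketch {
              unsigned long max_blocks;            /* configured limit */
              struct percpu_counter used_blocks;   /* was: count under stat_lock */
          };
      
          /* Allocating one block: no spinlock on the fast path. */
          static int example_alloc_block(struct shmem_sb_info_sketch *sbinfo)
          {
              if (sbinfo->max_blocks &&
                  percpu_counter_compare(&sbinfo->used_blocks,
                                         sbinfo->max_blocks) >= 0)
                  return -ENOSPC;    /* limit reached (within a few blocks) */
      
              percpu_counter_inc(&sbinfo->used_blocks);
              return 0;
          }
      
          /* Freeing a block. */
          static void example_free_block(struct shmem_sb_info_sketch *sbinfo)
          {
              percpu_counter_dec(&sbinfo->used_blocks);
          }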
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: add accurate compare function to percpu_counter library · 27f5e0f6
      Committed by Tim Chen
      Add percpu_counter_compare(), which allows for a quick but accurate
      comparison of a percpu_counter with a given value.
      
      A rough count is provided by the count field in the percpu_counter
      structure, without accounting for the values stored in the individual
      cpu counters.
      
      The actual count is the sum of count and the cpu counters.  However, the
      count field never differs from the actual value by more than
      batch*num_online_cpus(), so if count differs from the given value by more
      than this amount we do not need the actual count for the comparison; this
      allows for a quick comparison without summing up all the per-cpu counters.
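      In outline, the new helper looks roughly like this (a sketch based on the
      description above):
      
          int percpu_counter_compare(struct percpu_counter *fbc, s64 rhs)
          {
              s64 count;
      
              count = percpu_counter_read(fbc);
              /* The rough count can be off by at most batch * num_online_cpus(),
               * so a larger difference already decides the comparison. */
              if (abs(count - rhs) > (percpu_counter_batch * num_online_cpus())) {
                  if (count > rhs)
                      return 1;
                  else
                      return -1;
              }
              /* Too close to call: fall back to the precise (summed) count. */
              count = percpu_counter_sum(fbc);
              if (count > rhs)
                  return 1;
              else if (count < rhs)
                  return -1;
              else
                  return 0;
          }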
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • gcc-4.6: mm: fix unused but set warnings · 4e60c86b
      Committed by Andi Kleen
      No real bugs, just some dead code and some fixups.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • gcc-4.6: pagemap: avoid unused-but-set variable · 627295e4
      Committed by Andi Kleen
      Avoid quite a lot of warnings in header files in gcc 4.6 -Wall builds.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • topology: alternate fix for ia64 tiger_defconfig build breakage · 25106000
      Committed by Lee Schermerhorn
      Define stubs for the numa_*_id() generic percpu related functions for
      non-NUMA configurations in <asm-generic/topology.h> where the other
      non-numa stubs live.
      
      Fixes the ia64 !NUMA build breakage, e.g. tiger_defconfig.
      
      Back out the now-unneeded '#ifndef CONFIG_NUMA' guards from ia64 smpboot.c.
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Acked-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mmzone.h: remove dead prototype · b645bd12
      Committed by Alexander Nevenchannyy
      get_zone_counts() was dropped from the kernel tree (see
      http://www.mail-archive.com/mm-commits@vger.kernel.org/msg07313.html),
      but its prototype remains.
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rename try_set_zone_oom() to try_set_zonelist_oom() · ff321fea
      Committed by Minchan Kim
      We have been using the names try_set_zone_oom and clear_zonelist_oom.
      The role of these functions is to lock the zonelist to prevent parallel
      OOM handling, so clear_zonelist_oom makes sense, but try_set_zone_oom is
      awkward and does not match clear_zonelist_oom.
      
      Let's rename it to try_set_zonelist_oom.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: move sysctl declarations to oom.h · 8e4228e1
      Committed by David Rientjes
      The three oom killer sysctl variables (sysctl_oom_dump_tasks,
      sysctl_oom_kill_allocating_task, and sysctl_panic_on_oom) are better
      declared in include/linux/oom.h rather than kernel/sysctl.c.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: extract panic helper function · 309ed882
      Committed by David Rientjes
      There are various points in the oom killer where the kernel must determine
      whether to panic or not.  It's better to extract this to a helper function
      to remove all the confusion as to its semantics.
      
      Also fix a call to dump_header() where tasklist_lock is not read-locked,
      as required.
      
      There's no functional change with this patch.
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • oom: select task from tasklist for mempolicy ooms · 6f48d0eb
      Committed by David Rientjes
      The oom killer presently kills current whenever there is no more memory
      free or reclaimable on its mempolicy's nodes.  There is no guarantee that
      current is a memory-hogging task or that killing it will free any
      substantial amount of memory, however.
      
      In such situations, it is better to scan the tasklist for tasks that are
      allowed to allocate on current's set of nodes and kill the one with the
      highest badness() score.  This ensures that the most memory-hogging task,
      or the one configured by the user with /proc/pid/oom_adj, is always
      selected in such scenarios.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • buffer_head: remove redundant test from wait_on_buffer · a9877cc2
      Committed by Richard Kennedy
      The comment suggests that when b_count equals zero, wait_on_buffer is
      calling __wait_on_buffer to trigger some debugging, but as there is no such
      debug code in __wait_on_buffer the whole thing is redundant.
      
      As far as I can tell from the git log this has been the case for at least
      5 years, so it seems safe just to remove it.
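      The helper then reduces to the following (sketch):
      
          static inline void wait_on_buffer(struct buffer_head *bh)
          {
              might_sleep();
              if (buffer_locked(bh))
                  __wait_on_buffer(bh);
          }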
      Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: extend KSM refcounts to the anon_vma root · 76545066
      Committed by Rik van Riel
      KSM reference counts can cause an anon_vma to exist after the process it
      belongs to has already exited.  Because the anon_vma lock now lives in
      the root anon_vma, we need to ensure that the root anon_vma stays around
      until after all the "child" anon_vmas have been freed.
      
      The obvious way to do this is to have a "child" anon_vma take a reference
      to the root in anon_vma_fork.  When the anon_vma is freed at munmap or
      process exit, we drop the refcount in anon_vma_unlink and possibly free
      the root anon_vma.
      
      The KSM anon_vma reference count function also needs to be modified to
      deal with the possibility of freeing 2 levels of anon_vma.  The easiest
      way to do this is to break out the KSM magic and make it generic.
      
      When compiling without CONFIG_KSM, this code is compiled out.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Tested-by: Larry Woodman <lwoodman@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Tested-by: Dave Young <hidave.darkstar@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: always lock the root (oldest) anon_vma · 012f1800
      Committed by Rik van Riel
      Always (and only) lock the root (oldest) anon_vma whenever we do something
      in an anon_vma.  The recently introduced anon_vma scalability is due to
      the rmap code scanning only the VMAs that need to be scanned.  Many common
      operations still took the anon_vma lock on the root anon_vma, so always
      taking that lock is not expected to introduce any scalability issues.
      
      However, always taking the same lock does mean we only need to take one
      lock, which means rmap_walk on pages from any anon_vma in the vma is
      excluded from occurring during an munmap, expand_stack or other operation
      that needs to exclude rmap_walk and similar functions.
      
      Also add the proper locking to vma_adjust.
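      A sketch of the locking helpers after the change, assuming the root pointer
      introduced by the companion patch "mm: track the root (oldest) anon_vma":
      
          static inline void anon_vma_lock(struct anon_vma *anon_vma)
          {
              /* Every anon_vma in the tree funnels through one lock. */
              spin_lock(&anon_vma->root->lock);
          }
      
          static inline void anon_vma_unlock(struct anon_vma *anon_vma)
          {
              spin_unlock(&anon_vma->root->lock);
          }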
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Tested-by: Larry Woodman <lwoodman@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: track the root (oldest) anon_vma · 5c341ee1
      Committed by Rik van Riel
      Track the root (oldest) anon_vma in each anon_vma tree.  Because we only
      take the lock on the root anon_vma, we cannot use the lock on higher-up
      anon_vmas to lock anything.  This makes it impossible to do an indirect
      lookup of the root anon_vma, since the data structures could go away from
      under us.
      
      However, a direct pointer is safe because the root anon_vma is always the
      last one that gets freed on munmap or exit, by virtue of the same_vma list
      order and unlink_anon_vmas walking the list forward.
      
      [akpm@linux-foundation.org: fix typo]
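      A simplified sketch of the structure and the wiring at creation time; other
      anon_vma fields are omitted and the helper name is hypothetical:
      
          struct anon_vma {
              struct anon_vma *root;    /* root (oldest) anon_vma in the tree */
              spinlock_t lock;          /* taken via anon_vma->root->lock */
              struct list_head head;    /* chain of related vmas, simplified */
          };
      
          /* When a new anon_vma is created (sketch): */
          static void example_link_root(struct anon_vma *av, struct anon_vma *parent)
          {
              /* A brand-new tree is its own root; a child inherits the parent's,
               * giving a direct pointer that never needs an indirect lookup. */
              av->root = parent ? parent->root : av;
          }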
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Tested-by: Larry Woodman <lwoodman@redhat.com>
      Acked-by: Larry Woodman <lwoodman@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>