1. 19 Aug, 2009 (1 commit)
  2. 30 Jul, 2009 (3 commits)
  3. 11 Jul, 2009 (1 commit)
  4. 10 Jul, 2009 (1 commit)
  5. 07 Jul, 2009 (1 commit)
    • Fix virt_to_phys() warnings · 5bfd7560
      Committed by Kevin Cernekee
      These warnings were observed on MIPS32 using 2.6.31-rc1 and gcc-4.2.0:
      
      mm/page_alloc.c: In function 'alloc_pages_exact':
      mm/page_alloc.c:1986: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast
      
      drivers/usb/mon/mon_bin.c: In function 'mon_alloc_buff':
      drivers/usb/mon/mon_bin.c:1264: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast
      
      [akpm@linux-foundation.org: fix kernel/perf_counter.c too]
      Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5bfd7560
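      As a hedged sketch of the class of fix involved (variable names below
      are hypothetical, not from the patch): on MIPS32, virt_to_phys() takes
      a void *, so an address held in an unsigned long, as returned by
      __get_free_pages(), must be cast before the call.

          unsigned long addr = __get_free_pages(GFP_KERNEL, order);

          /* Warns on MIPS32 ("makes pointer from integer without a cast"):
           *      phys_addr_t phys = virt_to_phys(addr);
           */

          /* Fixed: cast the integer address to a pointer type first. */
          phys_addr_t phys = virt_to_phys((void *)addr);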
  6. 01 Jul, 2009 (1 commit)
    • x86: only clear node_states for 64bit · 66918dcd
      Committed by Yinghai Lu
      Nathan reported that
      
      | commit 73d60b7f
      | Author: Yinghai Lu <yinghai@kernel.org>
      | Date:   Tue Jun 16 15:33:00 2009 -0700
      |
      |    page-allocator: clear N_HIGH_MEMORY map before we set it again
      |
      |    SRAT tables may contain nodes of very small size.  The arch code may
      |    decide to not activate such a node.  However, currently the early boot
      |    code sets N_HIGH_MEMORY for such nodes.  These nodes therefore seem to be
      |    active although these nodes have no present pages.
      |
      |    For 64-bit, N_HIGH_MEMORY == N_NORMAL_MEMORY, so this works for 64-bit too
      
      unintentionally and incorrectly clears the cpuset.mems cgroup attribute on
      an i386 kvm guest, meaning that cpuset.mems can not be used.
      
      Fix this by clearing node_states[N_NORMAL_MEMORY] on 64-bit only, and
      save/restore that state in find_zone_movable_pfn.
      Reported-by: Nathan Lynch <ntl@pobox.com>
      Tested-by: Nathan Lynch <ntl@pobox.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      66918dcd
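      A minimal sketch of the shape of the fix, assuming the standard
      node_states API; the exact guard in the patch may differ:

          /* Only where N_HIGH_MEMORY == N_NORMAL_MEMORY (64-bit) is it
           * safe to clear the map before rebuilding it from the
           * SRAT-derived memory map. */
          #ifdef CONFIG_64BIT
                  nodes_clear(node_states[N_NORMAL_MEMORY]);
          #endif
          /* On 32-bit, clearing N_NORMAL_MEMORY here would wrongly wipe
           * node state that cpuset.mems depends on. */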
  7. 25 Jun, 2009 (1 commit)
  8. 24 Jun, 2009 (1 commit)
  9. 21 Jun, 2009 (1 commit)
  10. 19 Jun, 2009 (1 commit)
  11. 17 Jun, 2009 (28 commits)
    • vmscan: do not unconditionally treat zones that fail zone_reclaim() as full · fa5e084e
      Committed by Mel Gorman
      On NUMA machines, the administrator can configure zone_reclaim_mode,
      a more targeted form of direct reclaim.  On machines with large NUMA
      distances, for example, zone_reclaim_mode defaults to 1, meaning that
      clean unmapped pages will be reclaimed if the zone watermarks are not
      being met.  The problem is that if zone_reclaim() fails at all, the
      zone gets marked full.
      
      This can cause situations where a zone is usable, but is being skipped
      because it has been considered full.  Take a situation where a large tmpfs
      mount is occupying a large percentage of memory overall.  The pages do not
      get cleaned or reclaimed by zone_reclaim(), but the zone gets marked full
      and the zonelist cache considers them not worth trying in the future.
      
      This patch makes zone_reclaim() return more fine-grained information
      about what occurred when zone_reclaim() failed.  The zone only gets
      marked full if it really is unreclaimable.  If the scan simply did not
      occur, or if not enough pages were reclaimed with the limited
      reclaim_mode, then the zone is skipped.
      
      There is a side-effect to this patch.  Currently, if zone_reclaim()
      successfully reclaimed SWAP_CLUSTER_MAX, an allocation attempt would go
      ahead.  With this patch applied, zone watermarks are rechecked after
      zone_reclaim() does some work.
      
      This bug was introduced by commit 9276b1bc
      ("memory page_alloc zonelist caching speedup") way back in 2.6.19 when the
      zonelist_cache was introduced.  It was not intended that zone_reclaim()
      aggressively consider the zone to be full when it failed as full direct
      reclaim can still be an option.  Due to the age of the bug, it should be
      considered a -stable candidate.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fa5e084e
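      A sketch of the fine-grained results described above; the names follow
      the changelog's intent but should be treated as illustrative rather
      than the verbatim patch:

          #define ZONE_RECLAIM_NOSCAN   -2  /* reclaim did not scan */
          #define ZONE_RECLAIM_FULL     -1  /* zone really unreclaimable */
          #define ZONE_RECLAIM_SOME      0  /* reclaimed some, not enough */
          #define ZONE_RECLAIM_SUCCESS   1  /* reclaimed enough pages */

          /* Allocation path: only a genuinely full zone is marked full in
           * the zonelist cache; a skipped scan just moves on, and partial
           * progress is followed by a watermark recheck. */
          ret = zone_reclaim(zone, gfp_mask, order);
          if (ret == ZONE_RECLAIM_NOSCAN)
                  continue;       /* try the next zone */
          if (ret == ZONE_RECLAIM_FULL ||
              !zone_watermark_ok(zone, order, mark, classzone_idx, alloc_flags))
                  zlc_mark_zone_full(zonelist, z);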
    • page-allocator: clear N_HIGH_MEMORY map before we set it again · 73d60b7f
      Committed by Yinghai Lu
      SRAT tables may contain nodes of very small size.  The arch code may
      decide to not activate such a node.  However, currently the early boot
      code sets N_HIGH_MEMORY for such nodes.  These nodes therefore seem to be
      active although these nodes have no present pages.
      
      For 64-bit, N_HIGH_MEMORY == N_NORMAL_MEMORY, so this works for 64-bit too.
      Signed-off-by: Yinghai Lu <Yinghai@kernel.org>
      Tested-by: Jack Steiner <steiner@sgi.com>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      73d60b7f
    • oom: invoke oom killer for __GFP_NOFAIL · 82553a93
      Committed by David Rientjes
      The oom killer must be invoked regardless of the order if the allocation
      is __GFP_NOFAIL, otherwise it will loop forever when reclaim fails to free
      some memory.
      
      Cc: Nick Piggin <npiggin@suse.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      82553a93
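      A sketch of the rule in the allocator's OOM path (placement and names
      are illustrative, not the verbatim patch):

          /* Costly allocations may normally skip the OOM killer, but a
           * __GFP_NOFAIL caller cannot tolerate failure, so it must be
           * let through regardless of order. */
          if (order > PAGE_ALLOC_COSTLY_ORDER && !(gfp_mask & __GFP_NOFAIL))
                  goto out;       /* allowed to fail: don't OOM-kill */

          out_of_memory(zonelist, gfp_mask, order);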
    • mm: remove CONFIG_UNEVICTABLE_LRU config option · 68377659
      Committed by KOSAKI Motohiro
      Currently, nobody wants to turn UNEVICTABLE_LRU off.  Thus this
      configurability is unnecessary.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Acked-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      68377659
    • page-allocator: reset wmark_min and inactive ratio of zone when hotplug happens · bce7394a
      Committed by Minchan Kim
      This solves two problems.

      Whenever memory hotplug successfully completes, zone->present_pages
      has to change.

      1) Currently, memory hotplug calls setup_per_zone_wmark_min only when
         online_pages is called, not when offline_pages is called.

         That breaks the watermark balance.

      2) If zone->present_pages changes, zone->inactive_ratio also has to
         change, because inactive_ratio depends on zone->present_pages.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bce7394a
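      A minimal sketch of the combined idea, assuming the helpers introduced
      in this series; the wrapper itself is hypothetical:

          /* Both online_pages() and offline_pages() change
           * zone->present_pages, so both must refresh the derived values. */
          static void zone_size_changed(struct zone *zone, long nr_pages)
          {
                  zone->present_pages += nr_pages; /* +ve online, -ve offline */
                  setup_per_zone_wmarks();         /* min/low/high watermarks */
                  calculate_zone_inactive_ratio(zone);
          }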
    • page-allocator: add inactive ratio calculation function of each zone · 96cb4df5
      Committed by Minchan Kim
      Factor the per-zone arithmetic inside setup_per_zone_inactive_ratio()'s
      loop into a separate function, calculate_zone_inactive_ratio().  This
      function will be used by a later patch.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96cb4df5
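      A sketch of the refactoring, with the body reconstructed from the
      well-known inactive-ratio formula; treat the details as illustrative:

          static void calculate_zone_inactive_ratio(struct zone *zone)
          {
                  unsigned int gb, ratio;

                  /* Zone size in gigabytes */
                  gb = zone->present_pages >> (30 - PAGE_SHIFT);
                  if (gb)
                          ratio = int_sqrt(10 * gb);
                  else
                          ratio = 1;

                  zone->inactive_ratio = ratio;
          }

          static void __init setup_per_zone_inactive_ratio(void)
          {
                  struct zone *zone;

                  for_each_zone(zone)
                          calculate_zone_inactive_ratio(zone);
          }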
    • page-allocator: clean up functions related to pages_min · bc75d33f
      Committed by Minchan Kim
      Change the names of two functions.  It doesn't affect behavior.

      Presently, setup_per_zone_pages_min() changes a zone's low and high
      watermarks as well as min, so a better name is setup_per_zone_wmarks().
      This follows Mel's change of zone->pages_[min/low/high] to the
      zone->watermark array in "page allocator: replace the
      watermark-related union in struct zone with a watermark[] array".

       * setup_per_zone_pages_min => setup_per_zone_wmarks

      Of course, init_per_zone_pages_min has to change too, since there is
      no pages_min any more.

       * init_per_zone_pages_min => init_per_zone_wmark_min
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bc75d33f
    • mm, PM/Freezer: Disable OOM killer when tasks are frozen · 7f33d49a
      Committed by Rafael J. Wysocki
      Currently, the following scenario appears to be possible in theory:
      
      * Tasks are frozen for hibernation or suspend.
      * Free pages are almost exhausted.
      * A piece of code in the suspend code path attempts to allocate
        some memory using GFP_KERNEL and allocation order less than or
        equal to PAGE_ALLOC_COSTLY_ORDER.
      * __alloc_pages_internal() cannot find a free page so it invokes the
        OOM killer.
      * The OOM killer attempts to kill a task, but the task is frozen, so
        it doesn't die immediately.
      * __alloc_pages_internal() jumps to 'restart', unsuccessfully tries
        to find a free page and invokes the OOM killer.
      * No progress can be made.
      
      Although it is now hard to trigger during hibernation due to the memory
      shrinking carried out by the hibernation code, it is theoretically
      possible to trigger during suspend after the memory shrinking has been
      removed from that code path.  Moreover, since memory allocations are
      going to be used for the hibernation memory shrinking, it will be even
      more likely to happen during hibernation.
      
      To prevent it from happening, introduce the oom_killer_disabled switch
      that will cause __alloc_pages_internal() to fail in the situations in
      which the OOM killer would have been called and make the freezer set
      this switch after tasks have been successfully frozen.
      
      [akpm@linux-foundation.org: be nicer to the namespace]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Fengguang Wu <fengguang.wu@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f33d49a
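      A sketch of the switch as described; after the namespace cleanup the
      helpers plausibly look like this (names illustrative):

          extern bool oom_killer_disabled;

          static inline void oom_killer_disable(void)
          {
                  oom_killer_disabled = true;
          }

          static inline void oom_killer_enable(void)
          {
                  oom_killer_disabled = false;
          }

          /* Allocator slow path: fail the allocation rather than invoke
           * the OOM killer while tasks are frozen. */
          if (oom_killer_disabled)
                  goto nopage;

          /* Freezer: disable once all tasks are safely frozen; re-enable
           * when tasks are thawed. */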
    • page-allocator: warn if __GFP_NOFAIL is used for a large allocation · dab48dab
      Committed by Andrew Morton
      __GFP_NOFAIL is a bad fiction.  Allocations _can_ fail, and callers should
      detect and suitably handle this (and not by lamely moving the infinite
      loop up to the caller level either).
      
      Attempting to use __GFP_NOFAIL for a higher-order allocation is even
      worse, so add a once-off runtime check for this to slap people around for
      even thinking about trying it.
      
      Cc: David Rientjes <rientjes@google.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dab48dab
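      A sketch of the one-off check; the exact order threshold is an
      assumption here:

          if (gfp_mask & __GFP_NOFAIL) {
                  /*
                   * __GFP_NOFAIL is not to be used in new code, and a
                   * higher-order nofail allocation is a caller bug, so
                   * complain loudly, but only once.
                   */
                  WARN_ON_ONCE(order > 1);
          }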
    • mm: setup_per_zone_inactive_ratio - fix comment and make it __init · e9bb35df
      Committed by Cyrill Gorcunov
      The caller of setup_per_zone_inactive_ratio is an __init function, so
      there is no need to keep the callee around after initialization
      completes either.  Also fix a comment.
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9bb35df
    • mm: setup_per_zone_inactive_ratio - do not call for int_sqrt if not needed · 5c87eada
      Committed by Cyrill Gorcunov
      int_sqrt() returns 0 if its argument is zero, so only call it when needed.
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c87eada
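      The change amounts to the guard already reflected in the helper
      sketched under the factoring patch above; in isolation (illustrative):

          if (gb)
                  ratio = int_sqrt(10 * gb);
          else
                  ratio = 1;      /* int_sqrt(0) is 0; skip the call */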
    • vmscan: cleanup the scan batching code · 6e08a369
      Committed by Wu Fengguang
      The vmscan batching logic is convoluted.  Move it into a standalone
      function, nr_scan_try_batch(), and document it.  No behavior change.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6e08a369
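      A sketch of the factored-out helper as described: small scan requests
      are accumulated and only released as one batch once they add up to
      swap_cluster_max (reconstruction, details illustrative):

          static unsigned long nr_scan_try_batch(unsigned long nr_to_scan,
                                                 unsigned long *nr_saved_scan,
                                                 unsigned long swap_cluster_max)
          {
                  unsigned long nr;

                  *nr_saved_scan += nr_to_scan;
                  nr = *nr_saved_scan;

                  if (nr >= swap_cluster_max)
                          *nr_saved_scan = 0;  /* batch released; reset carry */
                  else
                          nr = 0;              /* keep accumulating */

                  return nr;
          }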
    • mm: introduce PageHuge() for testing huge/gigantic pages · 20a0307c
      Committed by Wu Fengguang
      A series of patches to enhance the /proc/pagemap interface and to add a
      userspace executable which can be used to present the pagemap data.
      
      Export 10 more flags to end users (and more for kernel developers):
      
              11. KPF_MMAP            (pseudo flag) memory mapped page
              12. KPF_ANON            (pseudo flag) memory mapped page (anonymous)
              13. KPF_SWAPCACHE       page is in swap cache
              14. KPF_SWAPBACKED      page is swap/RAM backed
              15. KPF_COMPOUND_HEAD   (*)
              16. KPF_COMPOUND_TAIL   (*)
              17. KPF_HUGE		hugeTLB pages
              18. KPF_UNEVICTABLE     page is in the unevictable LRU list
              19. KPF_HWPOISON        hardware detected corruption
              20. KPF_NOPAGE          (pseudo flag) no page frame at the address
      
              (*) For compound pages, exporting _both_ head/tail info enables
                  users to tell where a compound page starts/ends, and its order.
      
      A simple demo of the page-types tool:
      
      # ./page-types -h
      page-types [options]
                  -r|--raw                  Raw mode, for kernel developers
                  -a|--addr    addr-spec    Walk a range of pages
                  -b|--bits    bits-spec    Walk pages with specified bits
                  -l|--list                 Show page details in ranges
                  -L|--list-each            Show page details one by one
                  -N|--no-summary           Don't show summary info
                  -h|--help                 Show this usage message
      addr-spec:
                  N                         one page at offset N (unit: pages)
                  N+M                       pages range from N to N+M-1
                  N,M                       pages range from N to M-1
                  N,                        pages range from N to end
                  ,M                        pages range from 0 to M
      bits-spec:
                  bit1,bit2                 (flags & (bit1|bit2)) != 0
                  bit1,bit2=bit1            (flags & (bit1|bit2)) == bit1
                  bit1,~bit2                (flags & (bit1|bit2)) == bit1
                  =bit1,bit2                flags == (bit1|bit2)
      bit-names:
                locked              error         referenced           uptodate
                 dirty                lru             active               slab
             writeback            reclaim              buddy               mmap
             anonymous          swapcache         swapbacked      compound_head
         compound_tail               huge        unevictable           hwpoison
                nopage           reserved(r)         mlocked(r)    mappedtodisk(r)
               private(r)       private_2(r)   owner_private(r)            arch(r)
              uncached(r)       readahead(o)       slob_free(o)     slub_frozen(o)
            slub_debug(o)
                                         (r) raw mode bits  (o) overloaded bits
      
      # ./page-types
                   flags      page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          487369     1903  _________________________________
      0x0000000000000014               5        0  __R_D____________________________  referenced,dirty
      0x0000000000000020               1        0  _____l___________________________  lru
      0x0000000000000024              34        0  __R__l___________________________  referenced,lru
      0x0000000000000028            3838       14  ___U_l___________________________  uptodate,lru
      0x0001000000000028              48        0  ___U_l_______________________I___  uptodate,lru,readahead
      0x000000000000002c            6478       25  __RU_l___________________________  referenced,uptodate,lru
      0x000100000000002c              47        0  __RU_l_______________________I___  referenced,uptodate,lru,readahead
      0x0000000000000040            8344       32  ______A__________________________  active
      0x0000000000000060               1        0  _____lA__________________________  lru,active
      0x0000000000000068             348        1  ___U_lA__________________________  uptodate,lru,active
      0x0001000000000068              12        0  ___U_lA______________________I___  uptodate,lru,active,readahead
      0x000000000000006c             988        3  __RU_lA__________________________  referenced,uptodate,lru,active
      0x000100000000006c              48        0  __RU_lA______________________I___  referenced,uptodate,lru,active,readahead
      0x0000000000004078               1        0  ___UDlA_______b__________________  uptodate,dirty,lru,active,swapbacked
      0x000000000000407c              34        0  __RUDlA_______b__________________  referenced,uptodate,dirty,lru,active,swapbacked
      0x0000000000000400             503        1  __________B______________________  buddy
      0x0000000000000804               1        0  __R________M_____________________  referenced,mmap
      0x0000000000000828            1029        4  ___U_l_____M_____________________  uptodate,lru,mmap
      0x0001000000000828              43        0  ___U_l_____M_________________I___  uptodate,lru,mmap,readahead
      0x000000000000082c             382        1  __RU_l_____M_____________________  referenced,uptodate,lru,mmap
      0x000100000000082c              12        0  __RU_l_____M_________________I___  referenced,uptodate,lru,mmap,readahead
      0x0000000000000868             192        0  ___U_lA____M_____________________  uptodate,lru,active,mmap
      0x0001000000000868              12        0  ___U_lA____M_________________I___  uptodate,lru,active,mmap,readahead
      0x000000000000086c             800        3  __RU_lA____M_____________________  referenced,uptodate,lru,active,mmap
      0x000100000000086c              31        0  __RU_lA____M_________________I___  referenced,uptodate,lru,active,mmap,readahead
      0x0000000000004878               2        0  ___UDlA____M__b__________________  uptodate,dirty,lru,active,mmap,swapbacked
      0x0000000000001000             492        1  ____________a____________________  anonymous
      0x0000000000005808               4        0  ___U_______Ma_b__________________  uptodate,mmap,anonymous,swapbacked
      0x0000000000005868            2839       11  ___U_lA____Ma_b__________________  uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c              30        0  __RU_lA____Ma_b__________________  referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total          513968     2007
      
      # ./page-types -r
                   flags      page-count       MB  symbolic-flags                     long-symbolic-flags
      0x0000000000000000          468002     1828  _________________________________
      0x0000000100000000           19102       74  _____________________r___________  reserved
      0x0000000000008000              41        0  _______________H_________________  compound_head
      0x0000000000010000             188        0  ________________T________________  compound_tail
      0x0000000000008014               1        0  __R_D__________H_________________  referenced,dirty,compound_head
      0x0000000000010014               4        0  __R_D___________T________________  referenced,dirty,compound_tail
      0x0000000000000020               1        0  _____l___________________________  lru
      0x0000000800000024              34        0  __R__l__________________P________  referenced,lru,private
      0x0000000000000028            3794       14  ___U_l___________________________  uptodate,lru
      0x0001000000000028              46        0  ___U_l_______________________I___  uptodate,lru,readahead
      0x0000000400000028              44        0  ___U_l_________________d_________  uptodate,lru,mappedtodisk
      0x0001000400000028               2        0  ___U_l_________________d_____I___  uptodate,lru,mappedtodisk,readahead
      0x000000000000002c            6434       25  __RU_l___________________________  referenced,uptodate,lru
      0x000100000000002c              47        0  __RU_l_______________________I___  referenced,uptodate,lru,readahead
      0x000000040000002c              14        0  __RU_l_________________d_________  referenced,uptodate,lru,mappedtodisk
      0x000000080000002c              30        0  __RU_l__________________P________  referenced,uptodate,lru,private
      0x0000000800000040            8124       31  ______A_________________P________  active,private
      0x0000000000000040             219        0  ______A__________________________  active
      0x0000000800000060               1        0  _____lA_________________P________  lru,active,private
      0x0000000000000068             322        1  ___U_lA__________________________  uptodate,lru,active
      0x0001000000000068              12        0  ___U_lA______________________I___  uptodate,lru,active,readahead
      0x0000000400000068              13        0  ___U_lA________________d_________  uptodate,lru,active,mappedtodisk
      0x0000000800000068              12        0  ___U_lA_________________P________  uptodate,lru,active,private
      0x000000000000006c             977        3  __RU_lA__________________________  referenced,uptodate,lru,active
      0x000100000000006c              48        0  __RU_lA______________________I___  referenced,uptodate,lru,active,readahead
      0x000000040000006c               5        0  __RU_lA________________d_________  referenced,uptodate,lru,active,mappedtodisk
      0x000000080000006c               3        0  __RU_lA_________________P________  referenced,uptodate,lru,active,private
      0x0000000c0000006c               3        0  __RU_lA________________dP________  referenced,uptodate,lru,active,mappedtodisk,private
      0x0000000c00000068               1        0  ___U_lA________________dP________  uptodate,lru,active,mappedtodisk,private
      0x0000000000004078               1        0  ___UDlA_______b__________________  uptodate,dirty,lru,active,swapbacked
      0x000000000000407c              34        0  __RUDlA_______b__________________  referenced,uptodate,dirty,lru,active,swapbacked
      0x0000000000000400             538        2  __________B______________________  buddy
      0x0000000000000804               1        0  __R________M_____________________  referenced,mmap
      0x0000000000000828            1029        4  ___U_l_____M_____________________  uptodate,lru,mmap
      0x0001000000000828              43        0  ___U_l_____M_________________I___  uptodate,lru,mmap,readahead
      0x000000000000082c             382        1  __RU_l_____M_____________________  referenced,uptodate,lru,mmap
      0x000100000000082c              12        0  __RU_l_____M_________________I___  referenced,uptodate,lru,mmap,readahead
      0x0000000000000868             192        0  ___U_lA____M_____________________  uptodate,lru,active,mmap
      0x0001000000000868              12        0  ___U_lA____M_________________I___  uptodate,lru,active,mmap,readahead
      0x000000000000086c             800        3  __RU_lA____M_____________________  referenced,uptodate,lru,active,mmap
      0x000100000000086c              31        0  __RU_lA____M_________________I___  referenced,uptodate,lru,active,mmap,readahead
      0x0000000000004878               2        0  ___UDlA____M__b__________________  uptodate,dirty,lru,active,mmap,swapbacked
      0x0000000000001000             492        1  ____________a____________________  anonymous
      0x0000000000005008               2        0  ___U________a_b__________________  uptodate,anonymous,swapbacked
      0x0000000000005808               4        0  ___U_______Ma_b__________________  uptodate,mmap,anonymous,swapbacked
      0x000000000000580c               1        0  __RU_______Ma_b__________________  referenced,uptodate,mmap,anonymous,swapbacked
      0x0000000000005868            2839       11  ___U_lA____Ma_b__________________  uptodate,lru,active,mmap,anonymous,swapbacked
      0x000000000000586c              29        0  __RU_lA____Ma_b__________________  referenced,uptodate,lru,active,mmap,anonymous,swapbacked
                   total          513968     2007
      
      # ./page-types --raw --list --no-summary --bits reserved
      offset  count   flags
      0       15      _____________________r___________
      31      4       _____________________r___________
      159     97      _____________________r___________
      4096    2067    _____________________r___________
      6752    2390    _____________________r___________
      9355    3       _____________________r___________
      9728    14526   _____________________r___________
      
      This patch:
      
      Introduce PageHuge(), which identifies huge/gigantic pages by their
      dedicated compound destructor functions.
      
      Also move prep_compound_gigantic_page() to hugetlb.c and make
      __free_pages_ok() non-static.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20a0307c
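      A sketch of PageHuge() as described, identifying hugeTLB/gigantic
      pages by their dedicated compound destructor (reconstruction, not the
      verbatim patch):

          int PageHuge(struct page *page)
          {
                  compound_page_dtor *dtor;

                  if (!PageCompound(page))
                          return 0;

                  page = compound_head(page);
                  dtor = get_compound_page_dtor(page);

                  return dtor == free_huge_page;
          }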
    • mm: use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic · a1dd268c
      Committed by Mel Gorman
      alloc_large_system_hash() has logic for freeing pages at the end of an
      excessively large power-of-two buffer that is a duplicate of what is in
      alloc_pages_exact().  This patch converts alloc_large_system_hash() to use
      alloc_pages_exact().
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a1dd268c
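      For reference, the consolidated call amounts to something like the
      following (the sizing variable is hypothetical):

          /* alloc_pages_exact() rounds the request up to a power-of-two
           * block internally and frees the unused tail pages, which is
           * exactly the logic alloc_large_system_hash() open-coded. */
          table = alloc_pages_exact(size, GFP_ATOMIC);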
    • page allocator: sanity check order in the page allocator slow path · 72807a74
      Committed by Mel Gorman
      Callers may speculatively call different allocators in order of preference
      trying to allocate a buffer of a given size.  The order needed to allocate
      this may be larger than what the page allocator can normally handle.
      While the allocator mostly does the right thing, it should not enter
      direct reclaim or wake up kswapd with a bogus order.  This patch
      sanity-checks the order in the slow path and returns NULL if it is
      too large.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72807a74
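      A sketch of the slow-path check described; the __GFP_NOWARN handling
      is an assumption:

          /* Never direct-reclaim or wake kswapd for an order that can
           * never be satisfied. */
          if (order >= MAX_ORDER) {
                  WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
                  return NULL;
          }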
    • page allocator: move free_page_mlock() to page_alloc.c · 092cead6
      Committed by KOSAKI Motohiro
      Currently, free_page_mlock() is only called from page_alloc.c.  Thus, we
      can move it to page_alloc.c.
      
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      092cead6
    • page allocator: use a pre-calculated value instead of num_online_nodes() in fast paths · 62bc62a8
      Committed by Christoph Lameter
      num_online_nodes() is called in a number of places but most often by the
      page allocator when deciding whether the zonelist needs to be filtered
      based on cpusets or the zonelist cache.  This is actually a heavy function
      and touches a number of cache lines.
      
      This patch stores the number of online nodes at boot time and updates the
      value when nodes get onlined and offlined.  The value is then used in a
      number of important paths in place of num_online_nodes().
      
      [rientjes@google.com: do not override definition of node_set_online() with macro]
      Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62bc62a8
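      A sketch based on the changelog (names as described; treat as
      illustrative):

          int nr_online_nodes __read_mostly = 1;

          static inline void node_set_online(int nid)
          {
                  node_set_state(nid, N_ONLINE);
                  nr_online_nodes = num_node_state(N_ONLINE); /* recount */
          }

          static inline void node_set_offline(int nid)
          {
                  node_clear_state(nid, N_ONLINE);
                  nr_online_nodes = num_node_state(N_ONLINE);
          }

          /* Hot paths then read the cached value instead of weighing a
           * nodemask:  if (nr_online_nodes > 1) ...  */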
    • page allocator: get the pageblock migratetype without disabling interrupts · 974709bd
      Committed by Mel Gorman
      Local interrupts are disabled when freeing pages to the PCP list.
      Part of that free checks the migratetype of the pageblock the page is
      in, but it does so with interrupts disabled, and interrupts should
      never be disabled longer than necessary.  This patch checks the
      pageblock type with interrupts enabled, with the impact that a page
      may be freed to the wrong list when a pageblock changes type.  As that
      block is then already considered mixed from an anti-fragmentation
      perspective, this is not of vital importance.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      974709bd
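      A sketch of the reordering in the free path (locals illustrative):

          /* Look up the pageblock type before the irq-off section and
           * stash it on the page for the freeing path to use. */
          migratetype = get_pageblock_migratetype(page);
          set_page_private(page, migratetype);

          local_irq_save(flags);
          /* ... queue the page on the PCP list via page_private(page) ... */
          local_irq_restore(flags);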
    • page allocator: update NR_FREE_PAGES only as necessary · f2260e6b
      Committed by Mel Gorman
      When pages are freed to the buddy allocator, the zone NR_FREE_PAGES
      counter must be updated.  In the case of bulk per-cpu page freeing,
      it is currently updated once per page, which retouches cache lines
      more than necessary.  Update the counter once per per-cpu bulk free
      instead.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2260e6b
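      A sketch of the batched accounting (function shape illustrative):

          static void free_pages_bulk(struct zone *zone, int count,
                                      struct list_head *list, int order)
          {
                  spin_lock(&zone->lock);
                  /* Touch NR_FREE_PAGES once for the whole batch instead
                   * of once per page inside __free_one_page(). */
                  __mod_zone_page_state(zone, NR_FREE_PAGES, count << order);
                  while (count--) {
                          /* ... detach a page from 'list' and merge it
                           * into the buddy lists ... */
                  }
                  spin_unlock(&zone->lock);
          }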
    • page allocator: use allocation flags as an index to the zone watermark · 41858966
      Committed by Mel Gorman
      ALLOC_WMARK_MIN, ALLOC_WMARK_LOW and ALLOC_WMARK_HIGH determine
      whether pages_min, pages_low or pages_high is used as the zone
      watermark when allocating the pages.  Two branches in the allocator
      hotpath determine which watermark to use.
      
      This patch uses the flags as an array index into a watermark array that is
      indexed with WMARK_* defines accessed via helpers.  All call sites that
      use zone->pages_* are updated to use the helpers for accessing the values
      and the array offsets for setting.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      41858966
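      A sketch based on the changelog (illustrative):

          enum zone_watermarks {
                  WMARK_MIN,
                  WMARK_LOW,
                  WMARK_HIGH,
                  NR_WMARK
          };

          #define min_wmark_pages(z)  ((z)->watermark[WMARK_MIN])
          #define low_wmark_pages(z)  ((z)->watermark[WMARK_LOW])
          #define high_wmark_pages(z) ((z)->watermark[WMARK_HIGH])

          /* Hot path: the ALLOC_WMARK_* flags double as the array index,
           * replacing two branches. */
          mark = zone->watermark[alloc_flags & ALLOC_WMARK_MASK];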
    • page allocator: do not check for compound pages during the page allocator sanity checks · a3af9c38
      Committed by Nick Piggin
      A number of sanity checks are made on each page allocation and free,
      including that the page count is zero.  page_count() checks whether
      the page is compound and, if so, returns the count of the head page.
      However, in these paths we do not care whether the page is compound,
      as the count of each tail page should also be zero.
      
      This patch makes two changes to the use of page_count() in the free path.
      It converts one check of page_count() to a VM_BUG_ON() as the count should
      have been unconditionally checked earlier in the free path.  It also
      avoids checking for compound pages.
      
      [mel@csn.ul.ie: Wrote changelog]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a3af9c38
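      A sketch of the idea (details assumed): every page reaching the free
      path, head or tail, must already have a zero refcount, so the count
      can be read directly, without page_count()'s compound-head
      indirection:

          if (unlikely(atomic_read(&page->_count) != 0))
                  bad_page(page);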
    • page allocator: do not setup zonelist cache when there is only one node · d395b734
      Committed by Mel Gorman
      There is a zonelist cache which is used to track zones that are not in
      the allowed cpuset or were recently found to be full.  This reduces
      cache footprint on large machines.  On smaller machines, it just
      incurs cost for no gain.  This patch only uses the zonelist cache when
      there is more than one online node.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d395b734
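      A sketch of the guard in the allocation path (locals as in the
      surrounding fast path; illustrative):

          if (NUMA_BUILD && !did_zlc_setup && nr_online_nodes > 1) {
                  /* Only pay for zonelist-cache filtering when there is
                   * more than one node to filter between. */
                  allowednodes = zlc_setup(zonelist, alloc_flags);
                  zlc_active = 1;
                  did_zlc_setup = 1;
          }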
    • page allocator: do not disable interrupts in free_page_mlock() · da456f14
      Committed by Mel Gorman
      free_page_mlock() tests and clears PG_mlocked using locked versions of
      the bit operations.  If set, it disables interrupts to update counters,
      and this happens on every page free even though interrupts are
      disabled very shortly afterwards a second time.  This is wasteful.

      This patch splits what free_page_mlock() does.  The bit check is still
      made.  However, the counter update is delayed until interrupts are
      disabled, and the non-locked version of the bit clear is used.  One
      potential weirdness with this split is that the counters do not get
      updated if the bad_page() check is triggered, but a system showing bad
      pages is getting screwed already.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Acked-by: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      da456f14
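      A sketch of the split (reconstruction, not the verbatim patch):

          static inline void free_page_mlock(struct page *page)
          {
                  /* Runs with interrupts already disabled by the caller,
                   * so the non-atomic variants are safe. */
                  __ClearPageMlocked(page);
                  __dec_zone_page_state(page, NR_MLOCK);
                  __count_vm_event(UNEVICTABLE_MLOCKFREED);
          }

          /* Caller: test the bit up front, then do the clearing and the
           * counter updates inside the single irq-off section. */
          int clearMlocked = PageMlocked(page);

          local_irq_save(flags);
          if (unlikely(clearMlocked))
                  free_page_mlock(page);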
    • page allocator: do not call get_pageblock_migratetype() more than necessary · ed0ae21d
      Committed by Mel Gorman
      get_pageblock_migratetype() is potentially called twice for every page
      free: once when the page is freed to the pcp lists, and once when it
      is freed back to the buddy lists.  When freeing from the pcp lists,
      the pageblock type at the time of the free is already known, so use it
      rather than rechecking.  In low-memory situations under memory
      pressure, this might skew anti-fragmentation slightly, but the
      interference is minimal, and decisions that fragment memory are being
      made anyway.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ed0ae21d
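      The shape of the change is to plumb the already-known type through
      (signature illustrative):

          /* The pcp free path recorded the migratetype earlier, so pass
           * it down instead of re-reading the pageblock bitmap. */
          static void __free_one_page(struct page *page, struct zone *zone,
                                      int order, int migratetype);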
    • page allocator: inline __rmqueue_fallback() · 0ac3a409
      Committed by Mel Gorman
      __rmqueue_fallback() is in the slow path but has only one call site.
      Because of that, it can be inlined without causing text bloat.  On an
      x86-based config, it made no difference, as the savings were padded
      out by NOP instructions.  Mileage varies, but text will either
      decrease in size or remain static.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ac3a409
    • page allocator: inline buffered_rmqueue() · 0a15c3e9
      Committed by Mel Gorman
      buffered_rmqueue() is in the fast path, so inline it.  Because it has
      only one call site, it can be inlined without causing text bloat.  On
      an x86-based config, it made no difference, as the savings were padded
      out by NOP instructions.  Mileage varies, but text will either
      decrease in size or remain static.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0a15c3e9
    • page allocator: inline __rmqueue_smallest() · 728ec980
      Committed by Mel Gorman
      Inline __rmqueue_smallest by altering flow very slightly so that there is
      only one call site.  Because there is only one call-site, this function
      can then be inlined without causing text bloat.  On an x86-based config,
      this patch reduces text by 16 bytes.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      728ec980
    • page allocator: remove a branch by assuming __GFP_HIGH == ALLOC_HIGH · a56f57ff
      Committed by Mel Gorman
      Allocations that specify __GFP_HIGH get the ALLOC_HIGH flag.  If these
      flags are equal to each other, we can eliminate a branch.
      
      [akpm@linux-foundation.org: Suggested the hack]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a56f57ff
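      A sketch of the trick, with a build-time guard so the assumption can
      never silently break (illustrative):

          /* Fails the build if someone changes one flag without the other. */
          BUILD_BUG_ON(__GFP_HIGH != (__force gfp_t)ALLOC_HIGH);

          /* Before:
           *      if (gfp_mask & __GFP_HIGH)
           *              alloc_flags |= ALLOC_HIGH;
           * After: a plain mask, no branch. */
          alloc_flags |= (__force int)(gfp_mask & __GFP_HIGH);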