1. 17 December 2009, 1 commit
    • x86: Fix checking of SRAT when node 0 ram is not from 0 · 32996250
      Authored by Yinghai Lu
      Found one system that boots from socket1 instead of socket0; its SRAT gets rejected...
      
      [    0.000000] SRAT: Node 1 PXM 0 0-a0000
      [    0.000000] SRAT: Node 1 PXM 0 100000-80000000
      [    0.000000] SRAT: Node 1 PXM 0 100000000-2080000000
      [    0.000000] SRAT: Node 0 PXM 1 2080000000-4080000000
      [    0.000000] SRAT: Node 2 PXM 2 4080000000-6080000000
      [    0.000000] SRAT: Node 3 PXM 3 6080000000-8080000000
      [    0.000000] SRAT: Node 4 PXM 4 8080000000-a080000000
      [    0.000000] SRAT: Node 5 PXM 5 a080000000-c080000000
      [    0.000000] SRAT: Node 6 PXM 6 c080000000-e080000000
      [    0.000000] SRAT: Node 7 PXM 7 e080000000-10080000000
      ...
      [    0.000000] NUMA: Allocated memnodemap from 500000 - 701040
      [    0.000000] NUMA: Using 20 for the hash shift.
      [    0.000000] Adding active range (0, 0x2080000, 0x4080000) 0 entries of 3200 used
      [    0.000000] Adding active range (1, 0x0, 0x96) 1 entries of 3200 used
      [    0.000000] Adding active range (1, 0x100, 0x7f750) 2 entries of 3200 used
      [    0.000000] Adding active range (1, 0x100000, 0x2080000) 3 entries of 3200 used
      [    0.000000] Adding active range (2, 0x4080000, 0x6080000) 4 entries of 3200 used
      [    0.000000] Adding active range (3, 0x6080000, 0x8080000) 5 entries of 3200 used
      [    0.000000] Adding active range (4, 0x8080000, 0xa080000) 6 entries of 3200 used
      [    0.000000] Adding active range (5, 0xa080000, 0xc080000) 7 entries of 3200 used
      [    0.000000] Adding active range (6, 0xc080000, 0xe080000) 8 entries of 3200 used
      [    0.000000] Adding active range (7, 0xe080000, 0x10080000) 9 entries of 3200 used
      [    0.000000] SRAT: PXMs only cover 917504MB of your 1048566MB e820 RAM. Not used.
      [    0.000000] SRAT: SRAT not used.
      
      The early_node_map is not sorted, because node0, whose range starts at a non-zero address, is registered first.
      
      So sort the map right away, once all regions have been registered.
      
      This also fixes the regression introduced by 8716273c (x86: Export srat physical topology).
      
      -v2: make it more robust, so it handles interleaved-node layouts such as node0 [0,4g), [8,12g) and node1 [4g,8g), [12g,16g)
      -v3: update comments.
      Reported-and-tested-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4B2579D2.3010201@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      32996250
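      The fix above amounts to sorting the registered ranges by start address as soon as
      registration is complete, instead of assuming they arrive in ascending order. A minimal
      userspace sketch of that idea, using the ranges from the log above (the struct, field
      names, and comparator are illustrative, not the kernel's actual early_node_map code):

      #include <stdio.h>
      #include <stdlib.h>

      /* Illustrative stand-in for one "active range": a pfn range plus its node. */
      struct active_range {
              unsigned long start_pfn;
              unsigned long end_pfn;
              int nid;
      };

      /* Order ranges by start_pfn so later walks can assume ascending addresses. */
      static int cmp_range(const void *a, const void *b)
      {
              const struct active_range *ra = a, *rb = b;

              if (ra->start_pfn < rb->start_pfn)
                      return -1;
              return ra->start_pfn > rb->start_pfn;
      }

      int main(void)
      {
              /* Registration order from a machine that boots from socket1:
               * node0 is registered first even though its range starts higher. */
              struct active_range map[] = {
                      { 0x2080000, 0x4080000, 0 },
                      { 0x0,       0x96,      1 },
                      { 0x100,     0x7f750,   1 },
                      { 0x100000,  0x2080000, 1 },
              };
              size_t i, n = sizeof(map) / sizeof(map[0]);

              /* The fix: sort right after all regions are registered. */
              qsort(map, n, sizeof(map[0]), cmp_range);

              for (i = 0; i < n; i++)
                      printf("node %d: %#lx-%#lx\n",
                             map[i].nid, map[i].start_pfn, map[i].end_pfn);
              return 0;
      }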
  2. 26 November 2009, 2 commits
    • tracing: Fix kmem event exports · 4d795fb1
      Authored by Ingo Molnar
      Commit 53d0422c ("tracing: Convert some kmem events to DEFINE_EVENT")
      moved the kmem tracepoint creation from util.c to page_alloc.c,
      but forgot to move the exports.
      
      Move them back.
      
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      LKML-Reference: <4B0E286A.2000405@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4d795fb1
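      The rule the fix above restores: EXPORT_TRACEPOINT_SYMBOL() needs to live in the same
      translation unit that actually instantiates the tracepoints via CREATE_TRACE_POINTS.
      A hedged kernel-style sketch of the pattern (file placement and the exact list of
      exported events are illustrative, taken from the event names mentioned in the next
      commit below, not copied from the real mm/page_alloc.c):

      /* mm/page_alloc.c (sketch) */
      #define CREATE_TRACE_POINTS
      #include <trace/events/kmem.h>   /* the kmem tracepoints are defined here now */

      /*
       * Because the tracepoints are created in this file, the exports that let
       * modules attach to them must sit here as well, not in mm/util.c.
       */
      EXPORT_TRACEPOINT_SYMBOL(kmalloc);
      EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
      EXPORT_TRACEPOINT_SYMBOL(kfree);
      EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);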
    • tracing: Convert some kmem events to DEFINE_EVENT · 53d0422c
      Authored by Li Zefan
      Use DECLARE_EVENT_CLASS to remove duplicate code:
      
         text    data     bss     dec     hex filename
       333987   69800   27228  431015   693a7 mm/built-in.o.old
       330030   69800   27228  427058   68432 mm/built-in.o
      
      8 events are converted:
      
        kmem_alloc: kmalloc, kmem_cache_alloc
        kmem_alloc_node: kmalloc_node, kmem_cache_alloc_node
        kmem_free: kfree, kmem_cache_free
        mm_page: mm_page_alloc_zone_locked, mm_page_pcpu_drain
      
      No change in functionality.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      LKML-Reference: <4B0E286A.2000405@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      53d0422c
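      The conversion above relies on the DECLARE_EVENT_CLASS / DEFINE_EVENT pair: the class
      carries the shared TP_STRUCT__entry / TP_fast_assign / TP_printk body once, and each
      concrete event only restates its prototype. A reduced, hedged sketch of the shape
      (fields and format string are simplified relative to the real include/trace/events/kmem.h):

      DECLARE_EVENT_CLASS(kmem_alloc,

              TP_PROTO(unsigned long call_site, const void *ptr,
                       size_t bytes_req, size_t bytes_alloc),

              TP_ARGS(call_site, ptr, bytes_req, bytes_alloc),

              TP_STRUCT__entry(
                      __field(unsigned long,  call_site)
                      __field(const void *,   ptr)
                      __field(size_t,         bytes_req)
                      __field(size_t,         bytes_alloc)
              ),

              TP_fast_assign(
                      __entry->call_site   = call_site;
                      __entry->ptr         = ptr;
                      __entry->bytes_req   = bytes_req;
                      __entry->bytes_alloc = bytes_alloc;
              ),

              TP_printk("call_site=%lx ptr=%p bytes_req=%zu bytes_alloc=%zu",
                        __entry->call_site, __entry->ptr,
                        __entry->bytes_req, __entry->bytes_alloc)
      );

      /* Each event in the class now needs only its prototype, no duplicated body. */
      DEFINE_EVENT(kmem_alloc, kmalloc,
              TP_PROTO(unsigned long call_site, const void *ptr,
                       size_t bytes_req, size_t bytes_alloc),
              TP_ARGS(call_site, ptr, bytes_req, bytes_alloc)
      );

      DEFINE_EVENT(kmem_alloc, kmem_cache_alloc,
              TP_PROTO(unsigned long call_site, const void *ptr,
                       size_t bytes_req, size_t bytes_alloc),
              TP_ARGS(call_site, ptr, bytes_req, bytes_alloc)
      );

      This is where the size savings quoted above come from: the per-event body is emitted
      once per class rather than once per event.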
  3. 12 November 2009, 2 commits
  4. 29 October 2009, 1 commit
    • revert "mm: oom analysis: add buffer cache information to show_free_areas()" · b76146ed
      Authored by Andrew Morton
      Revert
      
          commit 71de1ccb
          Author:     KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
          AuthorDate: Mon Sep 21 17:01:31 2009 -0700
          Commit:     Linus Torvalds <torvalds@linux-foundation.org>
          CommitDate: Tue Sep 22 07:17:27 2009 -0700
      
              mm: oom analysis: add buffer cache information to show_free_areas()
      
      show_free_areas() is called during page allocation failures, and page
      allocation failures can occur in any calling context.
      
      But nr_blockdev_pages() takes VFS locks which should not be taken from
      hard IRQ context (at least).  The result is lockdep warnings (and
      deadlockability) during page allocation failures.
      
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b76146ed
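      The deadlock the revert avoids follows a standard pattern: a helper takes a plain,
      non-IRQ-safe spinlock, and its caller can end up running in hard interrupt context.
      A hedged sketch of that pattern (lock and function names are illustrative, not the
      actual fs/block_dev.c code):

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(example_bdev_lock);      /* illustrative lock */

      static long count_blockdev_pages(void)
      {
              long pages = 0;

              spin_lock(&example_bdev_lock);   /* plain lock: IRQs stay enabled */
              /* ... walk the block devices, sum their page-cache pages ... */
              spin_unlock(&example_bdev_lock);
              return pages;
      }

      /*
       * A page allocation can fail inside a hard interrupt handler, and the
       * failure path prints show_free_areas().  If that interrupt fires while
       * the local CPU is already inside the spin_lock() section above, the
       * handler spins on a lock its own CPU holds and never returns: a
       * self-deadlock, which lockdep reports as an inconsistent lock state.
       * Hence the revert, which keeps such helpers out of any-context code.
       */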
  5. 24 September 2009, 1 commit
  6. 22 September 2009, 25 commits
  7. 16 September 2009, 1 commit
    • HWPOISON: check and isolate corrupted free pages v2 · 2a7684a2
      Authored by Wu Fengguang
      If memory corruption hits free buddy pages, we can safely ignore them.
      No one will access them until allocation time; at that point prep_new_page()
      will automatically check and isolate PG_hwpoison pages for us (for order-0
      allocations).
      
      This patch expands prep_new_page() to check every component page in a high
      order page allocation, in order to completely stop PG_hwpoison pages from
      being recirculated.
      
      Note that the common case, allocating only a single page, does no more
      work than before. Allocations of order > 0 do a bit more work, but that
      is relatively uncommon.
      
      This simple implementation may drop some innocent neighbor pages; hopefully
      that is not a big problem, because the event should be rare enough.
      
      This patch adds some runtime costs to high order page users.
      
      [AK: Improved description]
      
      v2: Andi Kleen:
      Port to -mm code
      Move check into separate function.
      Don't dump stack in bad_pages for hwpoisoned pages.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      2a7684a2
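      A hedged sketch of the check described above: every component page of an order-N
      block is inspected before the block is handed out, and a hwpoisoned component makes
      the allocator drop the whole block and try another one (helper names are illustrative,
      not necessarily the kernel's exact ones):

      /* Illustrative per-page check. */
      static int page_looks_bad(struct page *p)
      {
              if (unlikely(PageHWPoison(p)))
                      return 1;       /* poisoned frame: reject quietly, no stack dump */
              /* ... the usual bad-page state checks would go here ... */
              return 0;
      }

      static int prep_new_pages_sketch(struct page *page, unsigned int order)
      {
              unsigned int i;

              /*
               * Order 0 loops exactly once, so the common single-page allocation
               * does no extra work; higher orders pay one extra check per
               * component page, which is the runtime cost noted above.
               */
              for (i = 0; i < (1U << order); i++) {
                      if (page_looks_bad(page + i))
                              return 1;       /* caller picks a different block */
              }
              return 0;
      }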
  8. 06 September 2009, 1 commit
    • page-allocator: always change pageblock ownership when anti-fragmentation is disabled · dd5d241e
      Authored by Mel Gorman
      On low-memory systems, anti-fragmentation gets disabled as fragmentation
      cannot be avoided on a sufficiently large boundary to be worthwhile.  Once
      disabled, there is a period of time when all the pageblocks are marked
      MOVABLE and the expectation is that they get marked UNMOVABLE at each call
      to __rmqueue_fallback().
      
      However, when MAX_ORDER is large the pageblocks do not change ownership
      because the normal criteria are not met.  This has the effect of
      prematurely breaking up too many large contiguous blocks.  This is most
      serious on NOMMU systems which depend on high-order allocations to boot.
      This patch causes pageblocks to change ownership on every fallback when
      anti-fragmentation is disabled.  This prevents the large blocks being
      prematurely broken up.
      
      This is a fix to commit 49255c61 [page
      allocator: move check for disabled anti-fragmentation out of fastpath] and
      the problem affects 2.6.31-rc8.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Tested-by: Paul Mundt <lethal@linux-sh.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Acked-by: Greg Ungerer <gerg@snapgear.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd5d241e
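      The decision the fix above changes can be sketched as a single predicate: with
      anti-fragmentation disabled, claim (re-mark) the pageblock on every fallback;
      otherwise keep the usual "only for large enough steals" criteria. A hedged sketch,
      not the literal __rmqueue_fallback() code:

      static bool should_claim_pageblock(unsigned int order, int start_migratetype)
      {
              /*
               * Anti-fragmentation disabled (low-memory / large MAX_ORDER case):
               * claim the block on every fallback so large contiguous ranges are
               * not nibbled away one small allocation at a time.
               */
              if (page_group_by_mobility_disabled)
                      return true;

              /* Normal criteria: only take ownership for big enough steals. */
              return order >= pageblock_order / 2 ||
                     start_migratetype == MIGRATE_RECLAIMABLE;
      }

      A caller would then re-mark the block's migratetype when this returns true; that
      re-marking is the "change of ownership" the changelog refers to.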
  9. 19 August 2009, 1 commit
  10. 30 July 2009, 3 commits
  11. 11 July 2009, 1 commit
  12. 10 July 2009, 1 commit