  1. 28 September 2009, 1 commit
    • parisc: rename parisc's vmalloc_start to parisc_vmalloc_start · 4255f0d2
      Committed by Helge Deller
      Building kernel 2.6.32(pre) gives this compiler warning:
      /linus-linux-2.6/mm/vmalloc.c: In function 'pcpu_get_vm_areas':
      /linus-linux-2.6/mm/vmalloc.c:2104: warning: 'vmalloc_start' is used
      uninitialized in this function

      The reason is that the code in mm/vmalloc.c defines a local variable called
      vmalloc_start, while parisc already defines a global variable of the same name.

      To avoid this kind of problem in the future, I suggest renaming the parisc
      variable to parisc_vmalloc_start (see the sketch after this entry).
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
      4255f0d2
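      To make the clash concrete, a minimal, purely illustrative C sketch of the
      naming collision and the rename; the function and values below are invented
      for the example and are not the actual kernel sources.

      /* Before: an arch-global whose name also occurs as a local in common code. */
      unsigned long vmalloc_start;             /* parisc global, pre-rename */

      /* Common-code-style function with an unrelated local of the same name;
       * the shared name is what makes warnings and reviews confusing.
       */
      static unsigned long pick_vmalloc_area(unsigned long hint)
      {
              unsigned long vmalloc_start = hint & ~0xffffUL;  /* local, unrelated */
              return vmalloc_start;
      }

      /* After the patch: the arch symbol carries an arch prefix, so the two
       * names can no longer be mistaken for one another.
       */
      unsigned long parisc_vmalloc_start;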
  2. 22 September 2009, 1 commit
  3. 03 July 2009, 2 commits
  4. 31 March 2009, 1 commit
    • parisc: fix usage of 32bit PTE page table entries on 32bit kernels · 48d27cb2
      Committed by Helge Deller
      This patch fixes a long-outstanding bug in 32bit parisc Linux kernels
      which prevented us from using 32bit PTE table entries (instead of 64bit
      entries of which 32 bits were unused).

      The problem was caused by this assembler statement in the L2_ptep
      macro in arch/parisc/kernel/entry.S:447:
      	EXTR \va,31-ASM_PGDIR_SHIFT,ASM_BITS_PER_PGD,\index
      which expanded to
      	extrw,u r8,9,11,r1
      and which has undefined behavior, since the length value (11) extends
      beyond the leftmost bit (11-1 > 9).
      Interestingly, PA2.0 processors seem not to care and just zero-extend
      the value, while PA1.1 processors do not.

      Fix this problem by detecting an address space overflow with ASM_BITS_PER_PGD
      and adjusting it accordingly. To prevent such problems in the future,
      some compile-time sanity checks were added to arch/parisc/mm/init.c
      (a standalone sketch of the constraint follows this entry).

      Since the page table now consumes only half of its old size, we can
      use the freed memory to harmonize 32bit and 64bit kernels and let both
      map 16MB for the initial page table.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
      48d27cb2
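      For reference, the constraint behind the bug as a small standalone C sketch:
      extrw,u r,pos,len,t extracts len bits ending at (big-endian) bit pos, so it
      is only defined when len <= pos + 1.  The values and the static assertion
      below are illustrative; the actual check the patch adds to
      arch/parisc/mm/init.c may be expressed differently.

      /* Illustrative, standalone C: encode the extrw,u constraint from the
       * commit message as a compile-time check.  The broken case had
       * pos = 31 - ASM_PGDIR_SHIFT = 9 and len = ASM_BITS_PER_PGD = 11,
       * i.e. 11 > 9 + 1, which is undefined.
       */
      #define ASM_PGDIR_SHIFT   22   /* pos = 31 - 22 = 9 (illustrative value) */
      #define ASM_BITS_PER_PGD  10   /* must satisfy len <= pos + 1            */

      _Static_assert(ASM_BITS_PER_PGD <= (31 - ASM_PGDIR_SHIFT) + 1,
                     "extrw,u length must not extend beyond the leftmost bit");

      int main(void) { return 0; }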
  5. 13 March 2009, 1 commit
  6. 25 July 2008, 2 commits
  7. 26 June 2008, 1 commit
  8. 13 June 2008, 1 commit
  9. 15 May 2008, 1 commit
  10. 13 May 2008, 1 commit
  11. 28 April 2008, 2 commits
    • mm: have zonelist contains structs with both a zone pointer and zone_idx · dd1a239f
      Committed by Mel Gorman
      Filtering zonelists requires very frequent use of zone_idx().  This is costly
      as it involves a lookup of another structure and a subtraction operation.  As
      the zone_idx is often required, it should be quickly accessible.  The node idx
      could also be stored here if accessing zone->node turned out to be significant,
      which may be the case on workloads where nodemasks are heavily used.

      This patch introduces a struct zoneref to store a zone pointer and a zone
      index.  The zonelist then consists of an array of these struct zonerefs which
      are looked up as necessary.  Helpers are given for accessing the zone index as
      well as the node index (a sketch of the structure and helpers follows this entry).

      [kamezawa.hiroyu@jp.fujitsu.com: Suggested struct zoneref instead of embedding information in pointers]
      [hugh@veritas.com: mm-have-zonelist: fix memcg ooms]
      [hugh@veritas.com: just return do_try_to_free_pages]
      [hugh@veritas.com: do_try_to_free_pages gfp_mask redundant]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd1a239f
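      For orientation, a sketch of the structure and accessors described above,
      modeled on include/linux/mmzone.h; it assumes the surrounding kernel
      definitions (struct zone, zone_to_nid()), and the exact helper bodies in
      this tree may differ.

      /* Cache the zone index next to the zone pointer so zonelist filtering
       * does not have to chase zone pointers and recompute zone_idx().
       */
      struct zoneref {
      	struct zone *zone;	/* pointer to the actual zone */
      	int zone_idx;		/* cached zone_idx(zone)      */
      };

      static inline struct zone *zonelist_zone(struct zoneref *zoneref)
      {
      	return zoneref->zone;
      }

      static inline int zonelist_zone_idx(struct zoneref *zoneref)
      {
      	return zoneref->zone_idx;	/* no structure lookup, no subtraction */
      }

      static inline int zonelist_node_idx(struct zoneref *zoneref)
      {
      	return zone_to_nid(zoneref->zone);	/* node index of the zone */
      }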
    • mm: use two zonelist that are filtered by GFP mask · 54a6eb5c
      Committed by Mel Gorman
      Currently a node has two sets of zonelists, one for each zone type in the
      system and a second set for GFP_THISNODE allocations.  Based on the zones
      allowed by a gfp mask, one of these zonelists is selected.  All of these
      zonelists consume memory and occupy cache lines.

      This patch replaces the multiple zonelists per node with two zonelists.  The
      first contains all populated zones in the system, ordered by distance, for
      fallback allocations when the target/preferred node has no free pages.  The
      second contains all populated zones in the node suitable for GFP_THISNODE
      allocations.

      An iterator macro called for_each_zone_zonelist() is introduced that iterates
      through each zone allowed by the GFP flags in the selected zonelist (a usage
      sketch follows this entry).
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      54a6eb5c
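      A hedged usage sketch of the iterator: the function below is invented for
      illustration, while for_each_zone_zonelist(), node_zonelist(), gfp_zone()
      and zone_page_state() are the helpers this zonelist rework is built around
      (their exact signatures in this tree may differ).

      /* Count free pages in every zone the GFP mask allows on a given node,
       * walking the single filtered zonelist instead of per-zone-type lists.
       */
      static unsigned long count_free_pages_sketch(int nid, gfp_t gfp_mask)
      {
      	struct zonelist *zonelist = node_zonelist(nid, gfp_mask);
      	enum zone_type highidx = gfp_zone(gfp_mask);
      	struct zoneref *z;
      	struct zone *zone;
      	unsigned long free = 0;

      	/* Visits only zones at or below the highest zone type allowed. */
      	for_each_zone_zonelist(zone, z, zonelist, highidx)
      		free += zone_page_state(zone, NR_FREE_PAGES);

      	return free;
      }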
  12. 08 February 2008, 1 commit
    • Introduce flags for reserve_bootmem() · 72a7fe39
      Committed by Bernhard Walle
      This patchset adds a flags argument to reserve_bootmem() and uses the
      BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect collisions
      between the crashkernel area and already used memory.

      This patch:

      Change the reserve_bootmem() function to accept a new flag, BOOTMEM_EXCLUSIVE.
      If that flag is set, the function returns -EBUSY if the memory has already
      been reserved in the past.  This is to avoid conflicts.

      Because that code runs before SMP initialisation, there is no race condition
      inside reserve_bootmem_core().  A usage sketch follows this entry.

      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix powerpc build]
      Signed-off-by: Bernhard Walle <bwalle@suse.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      72a7fe39
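      A sketch of how a crashkernel-style caller would use the flag; the function
      below is invented for illustration, while the three-argument
      reserve_bootmem() and the BOOTMEM_EXCLUSIVE/-EBUSY behaviour are what this
      patchset introduces.

      /* Reserve the crash kernel region exclusively and back off if some other
       * early reservation already claimed part of it.
       */
      static void reserve_crashkernel_sketch(unsigned long crash_base,
      				       unsigned long crash_size)
      {
      	int ret;

      	ret = reserve_bootmem(crash_base, crash_size, BOOTMEM_EXCLUSIVE);
      	if (ret < 0) {
      		/* -EBUSY: the requested range collides with used memory. */
      		printk(KERN_WARNING "crashkernel reservation failed - "
      		       "memory is in use\n");
      		return;
      	}

      	/* ... record crash_base/crash_size and pass them to kexec ... */
      }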
  13. 18 October 2007, 1 commit
  14. 23 May 2007, 1 commit
  15. 17 February 2007, 1 commit
  16. 13 February 2007, 2 commits
  17. 12 February 2007, 1 commit
  18. 08 December 2006, 3 commits
  19. 04 October 2006, 1 commit
  20. 26 September 2006, 2 commits
  21. 02 July 2006, 1 commit
  22. 01 July 2006, 1 commit
  23. 22 April 2006, 1 commit
  24. 31 March 2006, 1 commit
  25. 22 March 2006, 1 commit
  26. 23 January 2006, 3 commits
  27. 22 January 2006, 1 commit
  28. 11 January 2006, 3 commits
  29. 30 October 2005, 1 commit
    • [PATCH] memory hotplug locking: node_size_lock · 208d54e5
      Committed by Dave Hansen
      pgdat->node_size_lock is basically only needed in one place in the normal
      code: show_mem(), which is the arch-specific sysrq-m printing function.

      Strictly speaking, the architectures not doing memory hotplug do not need this
      locking in show_mem().  However, they are all included for completeness.  This
      should also make any future consolidation of all of the implementations a
      little more straightforward.

      This lock is also held in the sparsemem code during a memory removal, as
      sections are invalidated.  This is the place where pfn_valid() is made false
      for a memory area that is being removed.  The lock is only required when doing
      pfn_valid() operations on memory for which the caller does not already hold a
      reference on the page, such as in show_mem() (a locking sketch follows this entry).
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      208d54e5
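      A rough sketch of the show_mem()-style locking pattern described above,
      assuming the pgdat_resize_lock()/pgdat_resize_unlock() wrappers from the
      memory-hotplug locking work around node_size_lock; the loop body is
      illustrative, not the actual arch code.

      /* Hold node_size_lock while probing pfn_valid() on pages we hold no
       * reference to, so a concurrent memory removal cannot invalidate the
       * section underneath us.
       */
      static void show_mem_sketch(pg_data_t *pgdat)
      {
      	unsigned long flags, pfn;
      	unsigned long end = pgdat->node_start_pfn + pgdat->node_spanned_pages;

      	pgdat_resize_lock(pgdat, &flags);	/* takes pgdat->node_size_lock */
      	for (pfn = pgdat->node_start_pfn; pfn < end; pfn++) {
      		if (!pfn_valid(pfn))
      			continue;
      		/* ... inspect pfn_to_page(pfn) and tally page states ... */
      	}
      	pgdat_resize_unlock(pgdat, &flags);
      }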