1. 23 Mar 2006, 4 commits
  2. 22 Mar 2006, 2 commits
    • [PATCH] hugepage: is_aligned_hugepage_range() cleanup · 42b88bef
      Committed by David Gibson
      Quite a long time back, prepare_hugepage_range() replaced
      is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to
      verify if an address range is suitable for a hugepage mapping.
      is_aligned_hugepage_range() stuck around, but only to implement
      prepare_hugepage_range() on archs which didn't implement their own.
      
      Most archs (everything except ia64 and powerpc) used the same
      implementation of is_aligned_hugepage_range().  On powerpc, which
      implements its own prepare_hugepage_range(), the custom version was never
      used.
      
      In addition, "is_aligned_hugepage_range()" was a bad name, because it
      suggests it returns true iff the given range is a good hugepage range,
      whereas in fact it returns 0-or-error (so the sense is reversed).
      
      This patch cleans up by abolishing is_aligned_hugepage_range().  Instead
      prepare_hugepage_range() is defined directly.  Most archs use the default
      version, which simply checks the given region is aligned to the size of a
      hugepage.  ia64 and powerpc define custom versions.  The ia64 one simply
      checks that the range is in the correct address space region in addition to
      being suitably aligned.  The powerpc version (just as previously) checks
      for suitable addresses, and if necessary performs low-level MMU frobbing to
      set up new areas for use by hugepages.
      
      No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      42b88bef
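The default version described above (an alignment-only check that returns 0-or-error, not a boolean) can be sketched in plain C. The constants and function body here are illustrative stand-ins, not the actual arch definitions:

```c
#include <errno.h>

/* Hypothetical hugepage size for illustration; the real value is
 * arch-specific (HPAGE_SHIFT varies per architecture). */
#define HPAGE_SHIFT 22
#define HPAGE_SIZE  (1UL << HPAGE_SHIFT)
#define HPAGE_MASK  (~(HPAGE_SIZE - 1))

/* Sketch of the generic default prepare_hugepage_range(): succeed
 * (return 0) only if both the start address and the length are
 * hugepage-aligned, else return -EINVAL.  Note the 0-or-error
 * convention -- exactly the reversed sense that made the old name
 * is_aligned_hugepage_range() misleading. */
static int prepare_hugepage_range(unsigned long addr, unsigned long len)
{
	if (len & ~HPAGE_MASK)
		return -EINVAL;	/* length not a multiple of a hugepage */
	if (addr & ~HPAGE_MASK)
		return -EINVAL;	/* start not hugepage-aligned */
	return 0;
}
```

The ia64 and powerpc overrides layer extra checks (address-space region, MMU setup) on top of this same alignment test.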
    • [PATCH] remove set_page_count() outside mm/ · 7835e98b
      Committed by Nick Piggin
      set_page_count usage outside mm/ is limited to setting the refcount to 1.
      Remove set_page_count from outside mm/, and replace those users with
      init_page_count() and set_page_refcounted().
      
      This allows more debug checking, and tighter control on how code is allowed
      to play around with page->_count.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7835e98b
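A minimal sketch of what the new helper amounts to: since every set_page_count() caller outside mm/ was setting the refcount to 1, init_page_count() captures that one idiom by name. The struct here is a toy stand-in (the kernel's page->_count is an atomic_t manipulated with atomic ops):

```c
/* Toy stand-in for struct page's _count field, for illustration
 * only -- the real kernel field is an atomic_t. */
struct page {
	int _count;	/* the page refcount */
};

/* Sketch: init_page_count() initialises a freshly allocated page's
 * refcount to 1.  Restricting outside-mm/ users to this helper
 * (instead of raw set_page_count()) lets mm/ keep tighter control
 * and debug checks over how _count is manipulated. */
static void init_page_count(struct page *page)
{
	page->_count = 1;
}

static int page_count(const struct page *page)
{
	return page->_count;
}
```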
  3. 09 Mar 2006, 1 commit
    • [IA64] Fix race in the accessed/dirty bit handlers · d8117ce5
      Committed by Christoph Lameter
      A pte may be zapped by the swapper, an exiting process, unmapping, or page
      migration while the accessed or dirty bit handlers are about to run. In that
      case the accessed or dirty bit is set on a zeroed pte, which leads the VM to
      conclude that this is a swap pte. This may lead to
      
      - Messages from the vm like
      
      swap_free: Bad swap file entry 4000000000000000
      
      - Processes being aborted
      
      swap_dup: Bad swap file entry 4000000000000000
      VM: killing process ....
      
      Page migration is particularly prone to triggering this race since
      it needs to remove and restore page table entries.
      
      The fix here is to check the present bit and simply not update
      the pte if the page is no longer present. If the page is not present,
      the fault handler will run next and take care of the problem
      by bringing the page back and then marking the page dirty or moving it onto
      the active list.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      d8117ce5
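The essence of the fix can be sketched as a pure function over a pte value. The bit layout below is hypothetical, chosen only for illustration (the real ia64 pte layout differs):

```c
#include <stdint.h>

/* Hypothetical pte bit layout for illustration only. */
#define PTE_PRESENT 0x1u
#define PTE_DIRTY   0x2u

typedef uint64_t pte_t;

/* Sketch of the fix: set the dirty bit only if the pte is still
 * present.  If another path (swapper, exit, unmap, page migration)
 * zapped the pte to zero under us, leave it untouched -- writing a
 * dirty bit into a zeroed pte would make the VM misread it as a
 * swap entry ("Bad swap file entry").  The subsequent fault brings
 * the page back and redoes the update safely. */
static pte_t pte_set_dirty_if_present(pte_t pte)
{
	if (!(pte & PTE_PRESENT))
		return pte;	/* zapped under us: do nothing */
	return pte | PTE_DIRTY;
}
```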
  4. 08 Mar 2006, 4 commits
  5. 01 Mar 2006, 2 commits
  6. 28 Feb 2006, 6 commits
  7. 27 Feb 2006, 1 commit
  8. 17 Feb 2006, 2 commits
  9. 16 Feb 2006, 9 commits
  10. 15 Feb 2006, 2 commits
  11. 10 Feb 2006, 4 commits
  12. 09 Feb 2006, 3 commits