1. 30 January, 2008 (1 commit)
    • spinlock: lockbreak cleanup · 95c354fe
      Authored by Nick Piggin
      The break_lock data structure and code for spinlocks are quite nasty.
      Not only does it double the size of a spinlock but it changes locking to
      a potentially less optimal trylock.
      
      Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
      __raw_spin_is_contended that uses the lock data itself to determine whether
      there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
      not set.
      
      Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
      decouple it from the spinlock implementation, and make it typesafe (rwlocks
      do not have any need_lockbreak sites -- why do they even get bloated up
      with that break_lock then?).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      95c354fe
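
      A minimal sketch of the caller-side pattern this change enables: a
      long-running loop under a spinlock polls spin_needbreak() and briefly
      drops the lock when someone else is waiting. Only spin_needbreak()
      (added here) and cpu_relax() are real kernel interfaces; my_lock,
      pending_count and process_one() below are illustrative names, not from
      the patch.

      #include <linux/spinlock.h>
      #include <linux/sched.h>

      static DEFINE_SPINLOCK(my_lock);
      static unsigned long pending_count;

      /* hypothetical helper: consume one unit of queued work */
      static void process_one(void)
      {
      }

      static void drain_work(void)
      {
              spin_lock(&my_lock);
              while (pending_count > 0) {
                      process_one();
                      pending_count--;

                      /*
                       * spin_needbreak() only reports contention: with
                       * CONFIG_GENERIC_LOCKBREAK it reads lock->break_lock,
                       * otherwise it uses spin_is_contended() on the raw
                       * lock word.  If someone is spinning on the lock,
                       * drop it briefly so they can make progress.
                       */
                      if (spin_needbreak(&my_lock)) {
                              spin_unlock(&my_lock);
                              cpu_relax();
                              spin_lock(&my_lock);
                      }
              }
              spin_unlock(&my_lock);
      }

      In process context the same break can be had with cond_resched_lock(),
      which after this commit checks spin_needbreak() internally.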
  2. 30 November, 2007 (1 commit)
    • Fix boot problem with iSeries lacking hugepage support · ba72cb8c
      Authored by Mel Gorman
      Ordinarily the size of a pageblock is determined at compile-time based on the
      hugepage size. On PPC64, the hugepage size is determined at runtime based on
      what is supported by the machine. With legacy machines such as iSeries that
      do not support hugepages, HPAGE_SHIFT is 0. This results in pageblock_order
      being set to -PAGE_SHIFT, and a crash follows shortly afterwards.
      
      This patch adds a function to select a sensible value for pageblock order by
      default when HUGETLB_PAGE_SIZE_VARIABLE is set. It checks that HPAGE_SHIFT
      is a sensible value before using the hugepage size; if it is not, MAX_ORDER-1
      is used.
      
      This is a fix for 2.6.24.
      
      Credit goes to Stephen Rothwell for identifying the bug and testing candidate
      patches.  Additional credit goes to Andy Whitcroft for spotting a problem
      with respect to IA-64 before releasing. Additional credit to David Gibson
      for testing with the libhugetlbfs test suite.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba72cb8c
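
      A minimal sketch of the fallback described above, with illustrative
      names (the patch's own helper differs): only derive the pageblock
      order from HPAGE_SHIFT when it denotes a real hugepage size, otherwise
      fall back to MAX_ORDER-1. pageblock_order, HPAGE_SHIFT and
      HUGETLB_PAGE_ORDER are the real kernel symbols; choose_pageblock_order()
      is a made-up name.

      static void __init choose_pageblock_order(void)
      {
              /* nothing to do if an earlier caller already picked an order */
              if (pageblock_order)
                      return;

              /*
               * With HUGETLB_PAGE_SIZE_VARIABLE, HPAGE_SHIFT is only known
               * at runtime and stays 0 on machines without hugepage support
               * (legacy iSeries), so sanity-check it before using it.
               */
              if (HPAGE_SHIFT > PAGE_SHIFT)
                      pageblock_order = HUGETLB_PAGE_ORDER;
              else
                      pageblock_order = MAX_ORDER - 1;
      }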
  3. 20 October, 2007 (1 commit)
  4. 17 October, 2007 (3 commits)
  5. 12 October, 2007 (1 commit)
  6. 05 October, 2007 (2 commits)
  7. 04 October, 2007 (1 commit)
  8. 03 October, 2007 (6 commits)
  9. 13 September, 2007 (1 commit)
  10. 24 July, 2007 (1 commit)
  11. 23 July, 2007 (1 commit)
  12. 20 July, 2007 (1 commit)
  13. 18 July, 2007 (1 commit)
  14. 17 July, 2007 (1 commit)
  15. 12 July, 2007 (1 commit)
  16. 29 June, 2007 (2 commits)
  17. 25 June, 2007 (1 commit)
  18. 14 June, 2007 (2 commits)
  19. 23 May, 2007 (1 commit)
  20. 12 May, 2007 (2 commits)
  21. 09 May, 2007 (3 commits)
    • [POWERPC] Don't use SLAB/SLUB for PTE pages · 517e2263
      Authored by Hugh Dickins
      The SLUB allocator relies on struct page fields first_page and slab,
      overwritten by ptl when SPLIT_PTLOCK: so the SLUB allocator cannot then
      be used for the lowest level of pagetable pages.  This was obstructing
      SLUB on PowerPC, which uses kmem_caches for its pagetables.  So convert
      its pte level to use normal gfp pages (whereas pmd, pud and 64k-page pgd
      want partpages, so continue to use kmem_caches for pmd, pud and pgd).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      517e2263
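
      A rough sketch of the split described above, assuming the 2.6.21-era
      powerpc pgalloc helper signatures; pmd_cache is an illustrative
      stand-in for the arch's existing page-table kmem_cache, and the exact
      gfp flags in the patch may differ.

      static struct kmem_cache *pmd_cache;    /* illustrative handle only */

      static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
                                                unsigned long address)
      {
              /*
               * pte level: take a whole zeroed page straight from the buddy
               * allocator, so its struct page is free to hold the split
               * ptlock instead of SLUB's first_page/slab fields.
               */
              return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
      }

      static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
      {
              /*
               * pmd/pud/pgd only need a fraction of a page and never carry
               * the ptlock, so they can stay in kmem_caches.
               */
              return kmem_cache_alloc(pmd_cache, GFP_KERNEL);
      }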
    • [POWERPC] Add ability to 4K kernel to hash in 64K pages · 16c2d476
      Authored by Benjamin Herrenschmidt
      This adds the ability for a kernel compiled with 4K page size
      to have special slices containing 64K pages and to hash in the right
      type of hash PTEs.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      16c2d476
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Authored by Benjamin Herrenschmidt
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my need is:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To maintain/keep track of the page size per "segment" (as we can
      only have one page size per segment on powerpc; segments are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here or
      there, as the generic code hasn't been entirely cleaned up yet, but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism used formerly by our
      hugetlbfs implementation which divides the address space in
      "meta-segments" which I called "slices".  The division is done using
      256MB slices below 4G, and 1T slices above.  Thus the address space is
      divided currently into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T).
      
      Doing so significantly simplifies the tracking of segments and avoids
      having to keep track of all the 256MB segments in the address space.
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care; it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
      
      The slice code provides a pair of "generic" get_unmapped_area() (bottomup
      and topdown) functions that should work with any slice size.  There is
      some trickiness here, so I would appreciate people having a look at the
      implementation of these and letting me know if I got something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d0f13e3c
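
      A small user-space sketch of the address-to-slice mapping described
      above (256MB slices below 4GB, 1TB slices above, with everything from
      4GB up to 1TB landing in high slice 0). The constants follow the
      commit text; the helper names are made up, not the kernel's.

      #include <stdio.h>

      #define SLICE_LOW_SHIFT   28                  /* 256MB = 1UL << 28 */
      #define SLICE_HIGH_SHIFT  40                  /* 1TB   = 1UL << 40 */
      #define SLICE_LOW_TOP     (1UL << 32)         /* low slices cover 0..4GB */

      /* Return the slice index for addr and report whether it is a high slice. */
      static unsigned int addr_to_slice(unsigned long addr, int *is_high)
      {
              if (addr < SLICE_LOW_TOP) {
                      *is_high = 0;
                      return addr >> SLICE_LOW_SHIFT;   /* low slices 0..15 */
              }
              *is_high = 1;
              /* 4GB..1TB maps to high slice 0, then one slice per TB */
              return addr >> SLICE_HIGH_SHIFT;          /* high slices 0..15 */
      }

      int main(void)
      {
              unsigned long samples[] = { 0x30000000UL,      /* 768MB */
                                          0x100000000UL,     /* 4GB   */
                                          0x20000000000UL }; /* 2TB   */

              for (unsigned int i = 0; i < 3; i++) {
                      int high;
                      unsigned int s = addr_to_slice(samples[i], &high);

                      printf("%#lx -> %s slice %u\n", samples[i],
                             high ? "high" : "low", s);
              }
              return 0;
      }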
  22. 08 May, 2007 (4 commits)
  23. 07 May, 2007 (1 commit)
  24. 02 May, 2007 (1 commit)