1. 24 Dec 2007, 1 commit
    • [POWERPC] 4xx: PLB to PCI Express support · a2d2e1ec
      Authored by Benjamin Herrenschmidt
      This adds, on top of the previous two patches, support for the 4xx PCI
      Express cells as found in the 440SPe revA, revB and 405EX.
      
      Unfortunately, due to significant differences between these, and other
      interesting "features" of those pieces of HW, the code isn't as simple
      as it is for PCI and PCI-X, and some of the functions differ significantly
      between the three implementations. Thus, not only can this code support
      only those three implementations for now (it will refuse to operate on
      any other), but ifdefs have been added to avoid the bloat of building a
      fairly large amount of code on platforms that don't need it.
      
      Also, this code currently only supports fully initializing root complex
      nodes, not endpoints. Some more code will have to be lifted from the
      arch/ppc implementation to add endpoint support, though the differences
      are mostly in memory mapping, so the question of how to represent
      endpoint-mode PCI in the device-tree remains open.
      
      Many thanks to Stefan Roese for testing & fixing up the 405EX bits!
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Stefan Roese <sr@denx.de>
      Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
      a2d2e1ec
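      As an illustration of the ifdef approach described above, a minimal
      sketch (CONFIG_44x/CONFIG_40x are real Kconfig symbols, but the
      function names here are hypothetical, not the patch's actual code):

          /* Each cell's setup is compiled only when its platform family is
           * selected, so other platforms don't carry the extra code. */
          #ifdef CONFIG_44x
          static int ppc440spe_pciex_init(void)
          {
                  /* 440SPe revA/revB specific setup would live here */
                  return 0;
          }
          #endif

          #ifdef CONFIG_40x
          static int ppc405ex_pciex_init(void)
          {
                  /* 405EX setup, which differs significantly from 440SPe */
                  return 0;
          }
          #endif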
  2. 21 Dec 2007, 1 commit
  3. 30 Nov 2007, 1 commit
    • Fix boot problem with iSeries lacking hugepage support · ba72cb8c
      Authored by Mel Gorman
      Ordinarily the size of a pageblock is determined at compile time based on
      the hugepage size. On PPC64, the hugepage size is determined at runtime
      based on what the machine supports. On legacy machines such as iSeries
      that do not support hugepages, HPAGE_SHIFT is 0. This results in
      pageblock_order being set to -PAGE_SHIFT, and a crash follows shortly
      afterwards.
      
      This patch adds a function that selects a sensible value for pageblock
      order by default when HUGETLB_PAGE_SIZE_VARIABLE is set. It checks that
      HPAGE_SHIFT is a sensible value before using the hugepage size; if it is
      not, MAX_ORDER-1 is used instead.
      
      This is a fix for 2.6.24.
      
      Credit goes to Stephen Rothwell for identifying the bug and testing
      candidate patches.  Additional credit goes to Andy Whitcroft for spotting
      a problem with respect to IA-64 before release, and to David Gibson for
      testing with the libhugetlbfs test suite.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba72cb8c
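      The fallback described above amounts to roughly the following (a hedged
      sketch; the patch's actual function name may differ, though HPAGE_SHIFT,
      HUGETLB_PAGE_ORDER and MAX_ORDER are the real kernel macros):

          /* Pick a sane default pageblock order when the hugepage size is
           * only known at runtime (HUGETLB_PAGE_SIZE_VARIABLE).  On a
           * machine without hugepage support, HPAGE_SHIFT is 0, so using
           * it directly would yield a negative order. */
          static inline int default_pageblock_order(void)
          {
                  if (HPAGE_SHIFT > PAGE_SHIFT)
                          return HUGETLB_PAGE_ORDER; /* sensible hugepage size */
                  return MAX_ORDER - 1;              /* e.g. legacy iSeries */
          }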
  4. 20 Oct 2007, 1 commit
  5. 17 Oct 2007, 3 commits
  6. 12 Oct 2007, 1 commit
  7. 05 Oct 2007, 2 commits
  8. 04 Oct 2007, 1 commit
  9. 03 Oct 2007, 6 commits
  10. 13 Sep 2007, 1 commit
  11. 24 Jul 2007, 1 commit
  12. 23 Jul 2007, 1 commit
  13. 20 Jul 2007, 1 commit
  14. 18 Jul 2007, 1 commit
  15. 17 Jul 2007, 1 commit
  16. 12 Jul 2007, 1 commit
  17. 29 Jun 2007, 2 commits
  18. 25 Jun 2007, 1 commit
  19. 14 Jun 2007, 2 commits
  20. 23 May 2007, 1 commit
  21. 12 May 2007, 2 commits
  22. 09 May 2007, 3 commits
    • [POWERPC] Don't use SLAB/SLUB for PTE pages · 517e2263
      Authored by Hugh Dickins
      The SLUB allocator relies on the struct page fields first_page and slab,
      which are overwritten by ptl when SPLIT_PTLOCK is in effect: the SLUB
      allocator therefore cannot be used for the lowest level of pagetable
      pages.  This was obstructing SLUB on PowerPC, which uses kmem_caches for
      its pagetables.  So convert the pte level to use normal gfp pages
      (whereas pmd, pud and 64k-page pgd want partial pages, so they continue
      to use kmem_caches).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      517e2263
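      The pte-level conversion boils down to taking a normal page from the
      page allocator instead of a kmem_cache object (a minimal sketch with a
      hypothetical name; __get_free_page and the gfp flags are the real
      kernel API):

          /* Allocate the lowest-level page table as a plain zeroed page so
           * struct page's first_page/slab fields remain free for the split
           * ptlock, rather than via kmem_cache_alloc(). */
          static pte_t *sketch_pte_alloc_one_kernel(struct mm_struct *mm,
                                                    unsigned long address)
          {
                  return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
          }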
    • [POWERPC] Add ability to 4K kernel to hash in 64K pages · 16c2d476
      Authored by Benjamin Herrenschmidt
      This adds the ability for a kernel compiled with a 4K page size to have
      special slices containing 64K pages, and to hash the corresponding type
      of hash PTEs.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      16c2d476
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Authored by Benjamin Herrenschmidt
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my needs are:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To keep track of the page size per "segment" (as we can only
      have one page size per segment on powerpc; segments are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here and
      there, as the generic code hasn't been entirely cleaned up yet, but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism formerly used by our
      hugetlbfs implementation, which divides the address space into
      "meta-segments" that I called "slices".  The division is done using
      256MB slices below 4G and 1T slices above, so the address space is
      currently divided into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T.)
      
      Doing so significantly simplifies the tracking of segments and avoids
      having to keep track of every 256MB segment in the address space.
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care; it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
      
      The slice code provides a pair of "generic" get_unmapped_area() functions
      (bottom-up and top-down) that should work with any slice size.  There is
      some trickiness here, so I would appreciate people having a look at their
      implementation and letting me know if I got something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      d0f13e3c
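      The slice bookkeeping above reduces to simple shift arithmetic, since
      256MB is 1 << 28 and 1T is 1 << 40 (an illustrative sketch, not the
      patch's actual macros):

          /* 256MB slices below 4G, 1T slices above; sixteen low slices
           * cover 0..4G, and high slice 0 spans the 4G..1T hole. */
          #define EX_SLICE_LOW_SHIFT   28  /* 256MB = 1 << 28 */
          #define EX_SLICE_HIGH_SHIFT  40  /* 1T    = 1 << 40 */

          static inline unsigned int ex_low_slice_index(unsigned long addr)
          {
                  return addr >> EX_SLICE_LOW_SHIFT;  /* 0..15 for addr < 4G */
          }

          static inline unsigned int ex_high_slice_index(unsigned long addr)
          {
                  return addr >> EX_SLICE_HIGH_SHIFT; /* 0 for addr < 1T */
          }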
  23. 08 May 2007, 4 commits
  24. 07 May 2007, 1 commit