1. 27 Jul 2008, 1 commit
  2. 10 Jul 2008, 1 commit
  3. 03 Jul 2008, 1 commit
  4. 09 Jun 2008, 1 commit
  5. 29 Apr 2008, 1 commit
  6. 28 Apr 2008, 1 commit
  7. 24 Apr 2008, 2 commits
    • [POWERPC] Port fixmap from x86 and use for kmap_atomic · 2c419bde
      Kumar Gala authored
      The fixmap code from x86 allows us to have compile time virtual addresses
      that we change the physical addresses of at run time.
      
      This is useful for applications like kmap_atomic, PCI config that is done
      via direct memory map, kexec/kdump.
      
      We got rid of CONFIG_HIGHMEM_START since we can now determine a more
      optimal location for PKMAP_BASE by working back from where the fixmap
      addresses start.
      
      Additionally, the kmap code in asm-powerpc/highmem.h always had its
      debug checking enabled.  It now uses CONFIG_DEBUG_HIGHMEM to decide
      whether the extra debug checks should be compiled in.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
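      For readers unfamiliar with the mechanism, here is a minimal sketch of
      the fixmap idea, loosely modeled on the x86 headers of that era (slot
      names and constants simplified, not the exact powerpc layout):

          /* Each enum slot is a compile-time constant, so __fix_to_virt()
           * folds to a constant virtual address; the physical page behind
           * that address can still be changed at run time. */
          enum fixed_addresses {
                  FIX_HOLE,
                  FIX_KMAP_BEGIN,  /* reserved ptes for temporary kernel mappings */
                  FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
                  __end_of_fixed_addresses
          };

          #define FIXADDR_TOP    ((unsigned long)(-PAGE_SIZE))
          #define FIXADDR_SIZE   (__end_of_fixed_addresses << PAGE_SHIFT)
          #define FIXADDR_START  (FIXADDR_TOP - FIXADDR_SIZE)

          /* index -> virtual address: pure arithmetic, usable in
           * compile-time contexts such as static initializers */
          #define __fix_to_virt(x)  (FIXADDR_TOP - ((x) << PAGE_SHIFT))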
    • [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) · 37dd2bad
      Kumar Gala authored
      Added support to allow an 85xx kernel to be run from a non-zero physical
      address (useful for cooperative asymmetric multiprocessing situations and
      kdump).  The support can be configured at compile time by setting
      CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
      desired.
      
      Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
      config option causes the kernel to determine at runtime the physical
      addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
      CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
      However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
      header physical address field in the resulting ELF image.
      
      Currently we are limited to running at a physical address that is a
      multiple of 256M, due to how we map TLBs to cover lowmem.  This should
      be fixed to allow 64M or maybe even 16M alignment in the future.  It
      is considered an error to try to run a kernel at a non-aligned
      physical address.
      
      All the magic for this support is accomplished by proper initialization
      of the kernel memory subsystem and use of ARCH_PFN_OFFSET.
      
      The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
      ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.
      
      /dev/mem continues to allow access to any physical address in the system
      regardless of how CONFIG_PHYSICAL_START is set.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
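      A hedged sketch of how ARCH_PFN_OFFSET shifts the pfn arithmetic
      (MEMORY_START stands for the runtime-determined physical base; the
      memmap macros shown are the generic FLATMEM helpers, simplified
      rather than the exact powerpc code):

          /* With the kernel at a non-zero physical base, pfn 0 no longer
           * backs mem_map[0]; every pfn <-> struct page conversion is
           * offset by the first valid pfn. */
          #define ARCH_PFN_OFFSET  (MEMORY_START >> PAGE_SHIFT)

          /* generic FLATMEM helpers that honour the offset */
          #define __pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
          #define __page_to_pfn(page)  ((unsigned long)((page) - mem_map) + \
                                        ARCH_PFN_OFFSET)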
  8. 17 Apr 2008, 1 commit
  9. 01 Apr 2008, 2 commits
  10. 14 Feb 2008, 1 commit
  11. 08 Feb 2008, 3 commits
  12. 06 Feb 2008, 1 commit
  13. 24 Jan 2008, 1 commit
    • [POWERPC] Fix handling of memreserve if the range lands in highmem · f98eeb4e
      Kumar Gala authored
      There were several issues if a memreserve range existed and happened
      to be in highmem:
      
      * The bootmem allocator is only aware of lowmem so calling
        reserve_bootmem with a highmem address would cause a BUG_ON
      * All highmem pages were provided to the buddy allocator
      
      Added an lmb_is_reserved() API that we now use to determine whether a
      highmem page should remain PageReserved or be provided to the buddy
      allocator.
      
      Also, we incorrectly reported the number of pages reserved, since all
      highmem pages are initially marked reserved and we clear the
      PageReserved flag as we "free" up the highmem pages.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
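      A sketch of the resulting mem_init() highmem loop (variable names
      assumed from the 2.6.24-era powerpc code, simplified):

          /* Only hand a highmem page to the buddy allocator if no lmb
           * reserve covers it; otherwise it stays PageReserved. */
          for (pfn = highmem_mapnr; pfn < max_mapnr; ++pfn) {
                  struct page *page = pfn_to_page(pfn);

                  if (lmb_is_reserved(pfn << PAGE_SHIFT))
                          continue;       /* reserved: keep out of the buddy */
                  ClearPageReserved(page);
                  init_page_count(page);
                  __free_page(page);      /* give it to the buddy allocator */
                  totalhigh_pages++;
          }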
  14. 20 Nov 2007, 1 commit
  15. 17 Oct 2007, 1 commit
  16. 17 Aug 2007, 1 commit
  17. 10 Jul 2007, 1 commit
  18. 14 Jun 2007, 1 commit
  19. 22 May 2007, 1 commit
  20. 09 May 2007, 1 commit
  21. 02 May 2007, 1 commit
  22. 13 Apr 2007, 1 commit
    • [POWERPC] Fix 32-bit mm operations when not using BATs · ee4f2ea4
      Benjamin Herrenschmidt authored
      On hash-table based 32-bit powerpcs, the hash management code runs
      under a big spinlock.  It's thus important that it never causes a hash
      fault itself.  That code is generally safe (it does memory accesses in
      real mode, among other things) with the exception of the access to the
      code itself.  That is, the kernel text needs to be accessible without
      taking hash miss exceptions.
      
      This is currently guaranteed by having a BAT register mapping part of the
      linear mapping permanently, which includes the kernel text. But this is
      not true if using the "nobats" kernel command line option (which can be
      useful for debugging) and will not be true when using DEBUG_PAGEALLOC
      implemented in a subsequent patch.
      
      This patch fixes this by pre-faulting in the hash table pages that hit
      the kernel text, and making sure we never evict such a page under hash
      pressure.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/hash_low_32.S |   22 ++++++++++++++++++++--
       arch/powerpc/mm/mem.c         |    3 ---
       arch/powerpc/mm/mmu_decl.h    |    4 ++++
       arch/powerpc/mm/pgtable_32.c  |   11 +++++++----
       4 files changed, 31 insertions(+), 9 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
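      A minimal sketch of the pre-faulting idea (the helper name is
      hypothetical and the hash_preload() signature is assumed from the
      era's powerpc mm code, not quoted from the patch):

          /* Walk the kernel text and pre-fault each page into the hash
           * table, so hash_page() can never miss on its own code. */
          static void __init preload_kernel_text(void)
          {
                  unsigned long v;

                  for (v = (unsigned long)_stext; v < (unsigned long)_etext;
                       v += PAGE_SIZE)
                          hash_preload(&init_mm, v, 0, 0x300 /* DSI trap */);
          }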
  23. 13 Feb 2007, 1 commit
    • [POWERPC] Fix vDSO page count calculation · 7ac9a137
      Benjamin Herrenschmidt authored
      The recent vDSO consolidation patches broke powerpc due to a mistake
      in the definition of the MAXPAGES constants.  This fixes it by moving
      to a dynamically allocated array of pages instead, as I don't much
      like hard-coded size limits.  Also move the vDSO initialisation to an
      initcall, since it doesn't really need to be done -that- early.
      
      Apologies for not catching the breakage earlier; Roland _did_ CC me on
      his patches a while ago, but I got busy with other things and forgot
      to test them.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
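      A sketch of the dynamic-allocation pattern (symbol names assumed from
      the powerpc vdso code of that era, 64-bit side and error paths
      trimmed):

          extern char vdso32_start, vdso32_end;
          static unsigned int vdso32_pages;
          static struct page **vdso32_pagelist;

          static int __init vdso_init(void)
          {
                  int i;

                  /* count pages at boot instead of hard-coding MAXPAGES */
                  vdso32_pages = (&vdso32_end - &vdso32_start) >> PAGE_SHIFT;
                  vdso32_pagelist = kzalloc((vdso32_pages + 1) *
                                            sizeof(struct page *), GFP_KERNEL);
                  if (vdso32_pagelist == NULL)
                          return -ENOMEM;
                  for (i = 0; i < vdso32_pages; i++)
                          vdso32_pagelist[i] =
                                  virt_to_page(&vdso32_start + i * PAGE_SIZE);
                  vdso32_pagelist[i] = NULL;  /* NULL-terminated page list */
                  return 0;
          }
          arch_initcall(vdso_init);   /* no need to run -that- early */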
  24. 08 Feb 2007, 1 commit
  25. 07 Feb 2007, 1 commit
  26. 12 Oct 2006, 1 commit
    • [PATCH] mm: use symbolic names instead of indices for zone initialisation · 6391af17
      Mel Gorman authored
      Arch-independent zone-sizing uses indices instead of symbolic names
      to offset within an array related to zones (max_zone_pfns).  The
      unintended impact is that ZONE_DMA and ZONE_NORMAL are initialised on
      powerpc instead of ZONE_DMA and ZONE_HIGHMEM when CONFIG_HIGHMEM is
      set.  As a result, the machine fails to boot, but will boot with
      CONFIG_HIGHMEM turned off.
      
      The following patch properly initialises the max_zone_pfns[] array and uses
      symbolic names instead of indices in each architecture using
      arch-independent zone-sizing.  Two users have successfully booted their
      powerpcs with it (one an ibook G4).  It has also been boot tested on x86,
      x86_64, ppc64 and ia64.  Please merge for 2.6.19-rc2.
      
      Credit to Benjamin Herrenschmidt for identifying the bug and rolling the
      first fix.  Additional credit to Johannes Berg and Andreas Schwab for
      reporting the problem and testing on powerpc.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
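      The fix's pattern, sketched for the powerpc case (boundary variables
      such as total_lowmem/top_of_ram are illustrative; free_area_init_nodes()
      was the 2.6.19-era arch-independent entry point):

          unsigned long max_zone_pfns[MAX_NR_ZONES];

          memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
          #ifdef CONFIG_HIGHMEM
          /* symbolic names make the intent explicit: lowmem caps
           * ZONE_DMA, everything above it is ZONE_HIGHMEM */
          max_zone_pfns[ZONE_DMA] = total_lowmem >> PAGE_SHIFT;
          max_zone_pfns[ZONE_HIGHMEM] = top_of_ram >> PAGE_SHIFT;
          #else
          max_zone_pfns[ZONE_DMA] = top_of_ram >> PAGE_SHIFT;
          #endif
          free_area_init_nodes(max_zone_pfns);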
  27. 27 Sep 2006, 1 commit
  28. 01 Jul 2006, 1 commit
  29. 28 Jun 2006, 1 commit
  30. 22 Apr 2006, 1 commit
  31. 28 Mar 2006, 1 commit
  32. 27 Mar 2006, 1 commit
  33. 23 Mar 2006, 1 commit
  34. 22 Mar 2006, 2 commits
  35. 07 Feb 2006, 1 commit
    • [PATCH] powerpc: Always panic if lmb_alloc() fails · d7a5b2ff
      Michael Ellerman authored
      Currently most callers of lmb_alloc() don't check whether it worked;
      if it ever fails, weird bad things will probably happen.  The few
      callers who do check just panic or BUG_ON.
      
      So make lmb_alloc() panic internally, to catch bugs at the source.
      The few callers who did check the result no longer need to.
      
      The only caller that did anything interesting with the return result
      was careful_allocation().  For it we create __lmb_alloc_base(), which
      _doesn't_ panic automatically; a little messy, but passable.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
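      A sketch of the resulting split (close to, but not verbatim, the lmb
      code of that kernel; LMB_ALLOC_ANYWHERE is assumed from the same
      file):

          /* panics at the source, so callers need not check the result */
          unsigned long __init lmb_alloc(unsigned long size, unsigned long align)
          {
                  return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
          }

          unsigned long __init lmb_alloc_base(unsigned long size,
                                              unsigned long align,
                                              unsigned long max_addr)
          {
                  unsigned long alloc = __lmb_alloc_base(size, align, max_addr);

                  if (alloc == 0)
                          panic("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
                                size, max_addr);
                  return alloc;
          }

          /* careful_allocation() keeps calling __lmb_alloc_base() directly
           * and handles a zero return itself */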