1. 23 Feb 2009: 26 commits
  2. 15 Feb 2009: 1 commit
  3. 13 Feb 2009: 7 commits
    • M
      powerpc/vsx: Fix VSX alignment handler for regs 32-63 · 26456dcf
      Michael Neuling authored
      Fix the VSX alignment handler for VSX registers 32-63, which are stored
      in the VMX part of the thread_struct, not the FPR part.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      CC: stable@kernel.org (2.6.27 & .28 please)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      26456dcf
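The area-selection fix above can be sketched in standalone C. The struct layout and names below are illustrative assumptions, not the actual thread_struct; the point is only that registers 32-63 must index the VMX save area:

```c
#include <assert.h>

/* Hypothetical model of the thread save areas: VSX registers 0-31
 * overlay the FPRs, while VSX registers 32-63 overlay the VMX
 * registers, so the alignment handler must index the VMX area for
 * the upper half. Names and sizes are illustrative only. */
struct thread_regs {
    double fpr[32];            /* backs VSX regs 0-31  */
    unsigned char vr[32][16];  /* backs VSX regs 32-63 */
};

static struct thread_regs tr;

/* Return the save address for a given VSX register number. */
static void *vsx_reg_addr(struct thread_regs *t, int reg)
{
    if (reg < 32)
        return &t->fpr[reg];
    return &t->vr[reg - 32];   /* the fix: VMX area, not the FPR area */
}
```

Before the fix, the handler effectively indexed the FPR array for all 64 registers, corrupting adjacent thread state for registers 32-63.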
    • G
      powerpc/ps3: Move ps3_mm_add_memory to device_initcall · 0047656e
      Geoff Levand authored
      Change the PS3 hotplug memory routine ps3_mm_add_memory() from
      a core_initcall to a device_initcall.
      
      core_initcall routines run before the powerpc topology_init()
      startup routine, which is a subsys_initcall, resulting in
      failure of ps3_mm_add_memory() when CONFIG_NUMA=y.  When
      ps3_mm_add_memory() fails, the system will boot with just the
      128 MiB of boot memory.
      Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      0047656e
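A minimal sketch of why the move matters: initcalls run in ascending level order, so a device_initcall is guaranteed to run after every subsys_initcall such as topology_init(). The numeric levels below mirror the usual ordering but are assumptions for illustration, not copied from include/linux/init.h:

```c
#include <assert.h>

/* Illustrative initcall levels; the lowest level runs first. */
enum initcall_level {
    CORE_INITCALL     = 1,  /* where ps3_mm_add_memory() used to run */
    POSTCORE_INITCALL = 2,
    ARCH_INITCALL     = 3,
    SUBSYS_INITCALL   = 4,  /* topology_init() runs at this level    */
    FS_INITCALL       = 5,
    DEVICE_INITCALL   = 6,  /* ps3_mm_add_memory() now runs here     */
};

/* Returns nonzero if level a is guaranteed to run before level b. */
static int runs_before(enum initcall_level a, enum initcall_level b)
{
    return a < b;
}
```

At CORE_INITCALL the NUMA topology is not yet set up, which is exactly the CONFIG_NUMA=y failure the commit describes; at DEVICE_INITCALL it is.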
    • D
      powerpc/mm: Fix numa reserve bootmem page selection · 06eccea6
      Dave Hansen authored
      Fix the powerpc NUMA reserve bootmem page selection logic.
      
      commit 8f64e1f2 (powerpc: Reserve
      in bootmem lmb reserved regions that cross NUMA nodes) changed
      the logic for how the powerpc LMB reserved regions were converted
      to bootmem reserved regions.  As the following discussion reports,
      the new logic was not correct.
      
      mark_reserved_regions_for_nid() goes through each LMB on the
      system that specifies a reserved area.  It searches for
      active regions that intersect with that LMB and are on the
      specified node.  It attempts to bootmem-reserve only the area
      where the active region and the reserved LMB intersect.  We
      can not reserve things on other nodes as they may not have
      bootmem structures allocated, yet.
      
      We base the size of the bootmem reservation on two possible
      things.  Normally, we just make the reservation start and
      stop exactly at the start and end of the LMB.
      
      However, the LMB reservations are not aware of NUMA nodes and
      on occasion a single LMB may cross into several adjacent
      active regions.  Those may even be on different NUMA nodes
      and will require separate calls to the bootmem reserve
      functions.  So, the bootmem reservation must be trimmed to
      fit inside the current active region.
      
      That's all fine and dandy, but we trim the reservation
      in a page-aligned fashion.  That's bad because we start the
      reservation at a non-page-aligned address: physbase.
      
      The reservation may only span 2 bytes, but those bytes may
      span two pfns and cause a reserve_size of 2*PAGE_SIZE.
      
      Take the case where you reserve 0x2 bytes at 0x0fff and
      where the active region ends at 0x1000.  You'll jump into
      that if() statement, but node_ar.end_pfn=0x1 and
      start_pfn=0x0.  You'll end up with a reserve_size=0x1000,
      and then call
      
        reserve_bootmem_node(node, physbase=0xfff, size=0x1000);
      
      0x1000 may not be on the same node as 0xfff.  Oops.
      
      In almost all the vm code, end_<anything> is not inclusive.
      If you have an end_pfn of 0x1234, page 0x1234 is not
      included in the range.  Using PFN_UP instead of
      (>> PAGE_SHIFT) makes this consistent with the other VM
      code.
      
      We also need to do math for the reserved size with physbase
      instead of start_pfn.  node_ar.end_pfn << PAGE_SHIFT is
      *precisely* the end of the node.  However,
      (start_pfn << PAGE_SHIFT) is *NOT* precisely the beginning
      of the reserved area.  That is, of course, physbase.
      If we don't use physbase here, the reserve_size can be
      made too large.
      
      From: Dave Hansen <dave@linux.vnet.ibm.com>
      Tested-by: Geoff Levand <geoffrey.levand@am.sony.com>  Tested on PS3.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      06eccea6
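The 0xfff/0x2 example above can be checked with standalone arithmetic. This is a simplified model of the broken versus corrected trimming math (byte addresses from physbase, exclusive end_pfn), not the kernel function itself:

```c
#include <assert.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((unsigned long)(x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((unsigned long)(x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

/* Old, buggy math: round both ends to pages, so a 2-byte reservation
 * starting at 0xfff inflates to a whole page and can spill past the
 * node boundary. */
static unsigned long buggy_reserve_size(unsigned long physbase,
                                        unsigned long size,
                                        unsigned long node_end_pfn)
{
    (void)size; /* the buggy version never used the real end */
    return (node_end_pfn << PAGE_SHIFT) - (PFN_DOWN(physbase) << PAGE_SHIFT);
}

/* Fixed math: trim the byte range [physbase, physbase + size) to the
 * node's exclusive end, keeping physbase as the true start. */
static unsigned long trimmed_reserve_size(unsigned long physbase,
                                          unsigned long size,
                                          unsigned long node_end_pfn)
{
    unsigned long end = physbase + size;
    unsigned long node_end = node_end_pfn << PAGE_SHIFT;

    if (end > node_end)
        end = node_end;
    return end - physbase;
}
```

With the fixed math, the reservation stays at 1 byte starting at 0xfff, so nothing is reserved on the neighbouring node.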
    • P
      powerpc/mm: Fix _PAGE_CHG_MASK to protect _PAGE_SPECIAL · fbc78b07
      Philippe Gerum authored
      Fix _PAGE_CHG_MASK so that pte_modify() does not affect the _PAGE_SPECIAL bit.
      Signed-off-by: Philippe Gerum <rpm@xenomai.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fbc78b07
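A sketch of the failure mode, with made-up bit values (the real ppc definitions differ): pte_modify() keeps only the bits in _PAGE_CHG_MASK and overwrites the rest with the new protection, so any bit missing from the mask, as _PAGE_SPECIAL was before this fix, is silently dropped:

```c
#include <assert.h>

/* Illustrative bit values only, not the real ppc PTE layout. */
#define _PAGE_DIRTY    0x01
#define _PAGE_ACCESSED 0x02
#define _PAGE_SPECIAL  0x04
#define _PAGE_RW       0x08

/* Fixed mask: _PAGE_SPECIAL is now preserved across pte_modify(). */
#define _PAGE_CHG_MASK (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_SPECIAL)

/* pte_modify() keeps the bits in the mask, replaces the rest. */
static unsigned long pte_modify(unsigned long pte, unsigned long newprot)
{
    return (pte & _PAGE_CHG_MASK) | newprot;
}
```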
    • K
      powerpc/fsl-booke: Fix compile warning · 96a8bac5
      Kumar Gala authored
      arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
      arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type 'long int', but argument 3 has type 'phys_addr_t'
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      96a8bac5
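The warning arises because phys_addr_t may be 32- or 64-bit depending on configuration. A common portable fix, sketched here in userspace C rather than as the actual kernel patch, is to cast to unsigned long long and print with %llu:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* In the kernel, phys_addr_t is u32 or u64 depending on config; we
 * model the widest case here. */
typedef unsigned long long phys_addr_t;

static char msgbuf[64];

/* Casting to unsigned long long matches %llu on every configuration,
 * silencing the "%ld expects long int" warning. */
static const char *format_lowmem(phys_addr_t total)
{
    snprintf(msgbuf, sizeof msgbuf, "%llu", (unsigned long long)total);
    return msgbuf;
}
```

(Later kernels also provide the %pa printk specifier for phys_addr_t.)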
    • K
      powerpc/book-3e: Introduce concept of Book-3e MMU · 70fe3af8
      Kumar Gala authored
      The Power ISA 2.06 spec introduces a standard MMU programming model that
      is based on the Freescale Book-E MMU programming model.  The Freescale
      version is pretty backward compatible with the ISA 2.06 definition, so
      we are starting to refactor some of the Freescale code so it can be
      easily shared.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      70fe3af8
    • K
      powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines · d66c82ea
      Kumar Gala authored
      The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
      architecture.  This was done in such a way as to remain code compatible
      with the existing HW.  Made the minor code changes to support both
      power-of-two and power-of-four page sizes.  Also added some new MAS
      bits and macros that are defined as part of the 2.06 ISA.  Renamed
      some things to use the 'Book-3e' concept to convey the new MMU that
      is based on the Freescale Book-E MMU programming model.

      Note, it is still invalid to try to use a page size that isn't
      supported by the CPU.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      d66c82ea
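The power-of-two versus power-of-four distinction can be illustrated through the TSIZE encoding. Under the assumption (not stated in the commit) that classic Freescale Book-E encodes page size as 4^TSIZE KB while the ISA 2.06 variant uses 2^TSIZE KB, the minor code change amounts to halving a shift:

```c
#include <assert.h>

/* Assumed encodings for illustration: classic Freescale Book-E MAS1
 * TSIZE gives page_size = 4^TSIZE KB (powers of four only), while the
 * ISA 2.06 variant gives page_size = 2^TSIZE KB (any power of two). */
static unsigned long booke_psize_kb(unsigned int tsize, int isa206)
{
    return isa206 ? (1UL << tsize)        /* 2^TSIZE KB */
                  : (1UL << (2 * tsize)); /* 4^TSIZE KB */
}
```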
4. 11 Feb 2009: 6 commits
    • K
      powerpc/85xx: Added 36-bit physical device tree for mpc8572ds board · a2404746
      Kumar Gala authored
      Added a device tree that should be identical to mpc8572ds.dtb except
      the physical addresses for all IO are above the 4G boundary.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      a2404746
    • K
      powerpc/85xx: Fixed PCI IO region sizes in mpc8572ds*.dts · ca34040c
      Kumar Gala authored
      The PCI IO region sizes were incorrectly set to 1M instead of 64k.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      ca34040c
    • K
      powerpc/mm: Fix _PAGE_COHERENT support on classic ppc32 HW · f99fb8a2
      Kumar Gala authored
      The following commit:
      
      commit 64b3d0e8
      Author: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Date:   Thu Dec 18 19:13:51 2008 +0000
      
          powerpc/mm: Rework usage of _PAGE_COHERENT/NO_CACHE/GUARDED
      
      broke setting of the _PAGE_COHERENT bit in the PPC HW PTE.  Since we now
      actually set _PAGE_COHERENT in the Linux PTE, we shouldn't be clearing it
      out before we propagate it to the PPC HW PTE.
      Reported-by: Martyn Welch <martyn.welch@gefanuc.com>
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      f99fb8a2
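A simplified model of the regression, with invented bit values: the PTE propagation path strips Linux-only software bits before writing the HW PTE, and the broken mask swept _PAGE_COHERENT (the M bit) away with them:

```c
#include <assert.h>

/* Illustrative bit values, not the real ppc32 layout. */
#define _PAGE_COHERENT 0x10  /* M bit: memory coherence required    */
#define _PAGE_SW_BITS  0x0f  /* Linux-only bits stripped for HW PTE */

/* Broken: the strip mask wrongly included _PAGE_COHERENT. */
static unsigned long hw_pte_broken(unsigned long linux_pte)
{
    return linux_pte & ~(_PAGE_SW_BITS | _PAGE_COHERENT);
}

/* Fixed: only software bits are stripped, so M reaches the HW PTE. */
static unsigned long hw_pte_fixed(unsigned long linux_pte)
{
    return linux_pte & ~_PAGE_SW_BITS;
}
```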
    • B
      powerpc/mm: Rework I$/D$ coherency (v3) · 8d30c14c
      Benjamin Herrenschmidt authored
      This patch reworks the way we do I and D cache coherency on PowerPC.
      
      The "old" way was split in 3 different parts depending on the processor type:
      
         - Hash with per-page exec support (64-bit and >= POWER4 only) does it
      at hashing time, by preventing exec on unclean pages and cleaning pages
      on exec faults.
      
         - Everything without per-page exec support (32-bit hash, 8xx, and
      64-bit < POWER4) does it for all pages going to user space in update_mmu_cache().
      
         - Embedded with per-page exec support does it from do_page_fault() on
      exec faults, in a way similar to what the hash code does.
      
      That leads to confusion and bugs. For example, the method using update_mmu_cache()
      is racy on SMP, where another processor can see the new PTE and hash it in before
      we have cleaned the cache, and then blow up trying to execute. This is hard to hit,
      but I think it has bitten us in the past.
      
      Also, it's inefficient for embedded where we always end up having to do at least
      one more page fault.
      
      This reworks the whole thing by moving the cache sync into two main call sites,
      though we keep different behaviours depending on the HW capability. The call
      sites are set_pte_at(), which is now made out of line, and ptep_set_access_flags(),
      which joins the former in pgtable.c.
      
      The base idea for Embedded with per-page exec support, is that we now do the
      flush at set_pte_at() time when coming from an exec fault, which allows us
      to avoid the double fault problem completely (we can even improve the situation
      more by implementing TLB preload in update_mmu_cache() but that's for later).
      
      If for some reason we didn't do it there and we try to execute, we'll hit
      the page fault, which will do a minor fault, which will hit ptep_set_access_flags()
      to do things like update _PAGE_ACCESSED or _PAGE_DIRTY if needed; we just make
      that path also perform the I/D cache sync for exec faults now. This second path
      is the catch-all for things that weren't cleaned at set_pte_at() time.
      
      For cpus without per-page exec support, we always do the sync at set_pte_at(),
      thus guaranteeing that when the PTE is visible to other processors, the cache
      is clean.
      
      For the 64-bit hash with per-page exec support case, we keep the old mechanism
      for now. I'll look into changing it later, once I've reworked a bit how we
      use _PAGE_EXEC.
      
      This is also a first step towards adding _PAGE_EXEC support for embedded platforms.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8d30c14c
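The two call sites and the catch-all behaviour can be modelled as a toy state machine. Everything below is an illustrative simulation of the flow described in the message, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

struct page { bool icache_clean; };
struct pte  { struct page *pg; bool exec; bool valid; };

static struct page pg0;
static struct pte  pte0;

static void flush_dcache_icache(struct page *pg)
{
    pg->icache_clean = true;
}

/* First call site: sync the caches before the PTE becomes visible.
 * CPUs without per-page exec always sync; CPUs with per-page exec
 * sync only when the fault was an exec fault. */
static void set_pte_at(struct pte *pte, struct page *pg,
                       bool exec_fault, bool has_per_page_exec)
{
    if (!has_per_page_exec || exec_fault)
        flush_dcache_icache(pg);
    pte->pg = pg;
    pte->exec = exec_fault || !has_per_page_exec;
    pte->valid = true;
}

/* Second call site: the catch-all for pages not cleaned at
 * set_pte_at() time, reached via the minor fault on exec. */
static void ptep_set_access_flags(struct pte *pte, bool exec_fault)
{
    if (exec_fault && !pte->pg->icache_clean)
        flush_dcache_icache(pte->pg);
    if (exec_fault)
        pte->exec = true;
}
```

In this model a non-exec mapping on a per-page-exec CPU is left dirty by set_pte_at(), and a later exec fault cleans it in ptep_set_access_flags(), which is exactly the double-path design the commit describes.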
    • G
      powerpc/amigaone: Default config for AmigaOne boards · 4b7ad359
      Gerhard Pircher authored
      CONFIG_CC_OPTIMIZE_FOR_SIZE is selected, because otherwise the kernel
      wouldn't boot. The AmigaOne's U-boot firmware seems to have a problem
      loading uImages bigger than 1.8 MB.
      Signed-off-by: Gerhard Pircher <gerhard_pircher@gmx.net>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      4b7ad359
    • G
      powerpc/amigaone: Bootwrapper and serial console support for AmigaOne · 8f23735d
      Gerhard Pircher authored
      This adds the bootwrapper for the cuImage target and a compatible property
      check for "pnpPNP,501" to the generic serial console support code.
      The default link address for the cuImage target is set to 0x800000. This
      allows booting the kernel with AmigaOS4's second-level bootloader, which
      always loads a uImage at 0x500000.
      Signed-off-by: Gerhard Pircher <gerhard_pircher@gmx.net>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      8f23735d