1. 01 May, 2016 (2 commits)
  2. 03 Mar, 2016 (1 commit)
  3. 22 Feb, 2016 (1 commit)
    • powerpc: Add POWER9 cputable entry · c3ab300e
      Committed by Michael Neuling
      Add a cputable entry for POWER9.  More code is required to actually
      boot and run on a POWER9, but this gets the base piece in, on which
      we can start building.
      
      Copied over from POWER8, except for the following (a sketch of the
      resulting entry follows this list):
      - Adds a new CPU_FTR_ARCH_300 bit to start hanging new architecture
        features from (in subsequent patches).
      - Advertises the new user feature bits PPC_FEATURE2_ARCH_3_00 and
        HAS_IEEE128 when on POWER9.
      - Drops CPU_FTR_SUBCORE.
      - Drops PMU code and machine check.
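      A minimal sketch of what such an entry in
      arch/powerpc/kernel/cputable.c could look like; the PVR value and
      the feature-mask macro names are illustrative assumptions, not
      quoted from the patch:

          { /* POWER9 */
              .pvr_mask           = 0xffff0000,
              .pvr_value          = 0x004e0000,          /* assumed POWER9 PVR */
              .cpu_name           = "POWER9 (raw)",
              .cpu_features       = CPU_FTRS_POWER9,     /* would include CPU_FTR_ARCH_300 */
              .cpu_user_features  = COMMON_USER_POWER9,
              .cpu_user_features2 = COMMON_USER2_POWER9, /* PPC_FEATURE2_ARCH_3_00,
                                                            PPC_FEATURE2_HAS_IEEE128 */
              .platform           = "power9",
          },
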
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 28 Jul, 2014 (1 commit)
  5. 11 Jul, 2014 (1 commit)
  6. 10 Jan, 2014 (1 commit)
    • powerpc/e6500: TLB miss handler with hardware tablewalk support · 28efc35f
      Committed by Scott Wood
      There are a few things that make the existing hw tablewalk handlers
      unsuitable for e6500:
      
       - Indirect entries go in TLB1 (though the resulting direct entries go in
         TLB0).
      
       - It has threads, but no "tlbsrx." -- so we need a spinlock and
         a normal "tlbsx".  Because we need this lock, hardware tablewalk
         is mandatory on e6500 unless we want to add spinlock+tlbsx to
         the normal bolted TLB miss handler.
      
       - TLB1 has no HES (nor next-victim hint), so we need software round
         robin (TODO: integrate this round-robin data with hugetlb/KVM); a
         sketch of such a victim selector appears below.
      
       - The existing tablewalk handlers map half of a page table at a time,
         because IBM hardware has a fixed 1MiB indirect page size.  e6500
         has variable size indirect entries, with a minimum of 2MiB.
         So we can't do the half-page indirect mapping, and even if we
         could it would be less efficient than mapping the full page.
      
       - Like on e5500, the linear mapping is bolted, so we don't need the
         overhead of supporting nested tlb misses.
      
      Note that hardware tablewalk does not work in rev1 of e6500.
      We do not expect to support e6500 rev1 in mainline Linux.
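
      A minimal sketch of the software round-robin victim selection
      described above; the struct, field, and function names are
      illustrative assumptions, not the code this patch adds:

          /* Per-core replacement state for TLB1, which has no hardware
           * next-victim hint on e6500.  The lock also serializes the
           * tlbsx probe, since the threads share a TLB but there is no
           * tlbsrx. */
          struct tlb1_rr {
              raw_spinlock_t lock;
              unsigned int next;   /* next victim entry */
              unsigned int first;  /* first replaceable entry */
              unsigned int last;   /* last replaceable entry */
          };

          /* Pick a victim and advance the hint, wrapping within the
           * replaceable range.  Caller holds rr->lock. */
          static unsigned int tlb1_pick_victim(struct tlb1_rr *rr)
          {
              unsigned int victim = rr->next;

              rr->next = (victim == rr->last) ? rr->first : victim + 1;
              return victim;
          }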
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Cc: Mihai Caraman <mihai.caraman@freescale.com>
  7. 15 Nov, 2012 (1 commit)
  8. 17 Sep, 2012 (1 commit)
  9. 03 Jul, 2012 (1 commit)
  10. 20 Sep, 2011 (1 commit)
    • powerpc: Hugetlb for BookE · 41151e77
      Committed by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the kernel command line (and cannot be deallocated).
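      For example, a few 256m pages could be reserved at boot and then
      mapped from userspace; the command line below is illustrative, and
      MAP_HUGETLB is the generic Linux interface rather than anything
      added by this patch:

          /* Boot with (illustrative):
           *   default_hugepagesz=256m hugepagesz=256m hugepages=4
           * then map one of the reserved pages: */
          #define _GNU_SOURCE
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>

          int main(void)
          {
              size_t len = 256UL << 20;  /* default hugepage size set above */
              void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
              if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
              }
              memset(p, 0, len);  /* touch the mapping */
              munmap(p, len);
              return 0;
          }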
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  11. 12 Jul, 2011 (1 commit)
  12. 08 Jul, 2011 (1 commit)
  13. 04 May, 2011 (1 commit)
  14. 27 Apr, 2011 (1 commit)
  15. 05 Aug, 2010 (1 commit)
    • memblock: Remove rmo_size, bury it in arch/powerpc where it belongs · cd3db0c4
      Committed by Benjamin Herrenschmidt
      The RMA (RMO is a misnomer) is a concept specific to ppc64, in fact
      to server ppc64, though I hijack it on embedded ppc64 for similar
      purposes.  It represents the area of memory that can be accessed in
      real mode (i.e. with the MMU off), or, on embedded, from the
      exception vectors (which are bolted in the TLB), which pretty much
      boils down to the same thing.
      
      We take that out of the generic MEMBLOCK data structure and move it
      into arch/powerpc where it belongs, renaming it to "RMA" while we're
      at it.
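      A minimal sketch of the idea after the move; the variable name
      follows the commit's rename, while the helper around it is an
      illustrative assumption:

          /* The real-mode-accessible limit now lives in arch/powerpc. */
          phys_addr_t ppc64_rma_size;

          /* Allocations that must be reachable with the MMU off stay
           * below the RMA limit (illustrative helper). */
          static void *alloc_below_rma(phys_addr_t size)
          {
              return __va(memblock_alloc_base(size, PAGE_SIZE,
                                              ppc64_rma_size));
          }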
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  16. 05 May, 2010 (1 commit)
  17. 28 Aug, 2009 (1 commit)
  18. 20 Aug, 2009 (1 commit)
  19. 09 Jun, 2009 (1 commit)
  20. 21 May, 2009 (1 commit)
  21. 23 Apr, 2009 (1 commit)
  22. 07 Apr, 2009 (1 commit)
  23. 24 Mar, 2009 (2 commits)
  24. 09 Mar, 2009 (1 commit)
  25. 13 Feb, 2009 (1 commit)
  26. 21 Dec, 2008 (2 commits)
  27. 04 Aug, 2008 (1 commit)
  28. 20 Aug, 2007 (1 commit)
  29. 03 Jul, 2007 (2 commits)
  30. 14 Jun, 2007 (1 commit)
  31. 02 May, 2007 (1 commit)
  32. 27 Apr, 2007 (1 commit)
    • [POWERPC] Prepare for splitting up mmu.h by MMU type · 8d2169e8
      Committed by David Gibson
      Currently asm-powerpc/mmu.h has definitions for the 64-bit hash based
      MMU.  If CONFIG_PPC64 is not set, it instead includes asm-ppc/mmu.h
      which contains a particularly horrible mess of #ifdefs giving the
      definitions for all the various 32-bit MMUs.
      
      It would be nice to have the low-level definitions for each MMU type
      neatly in their own separate files.  It would also be good to wean
      arch/powerpc off its dependence on the old asm-ppc/mmu.h.
      
      This patch makes a start on such a cleanup by moving the definitions
      for the 64-bit hash MMU to their own file, asm-powerpc/mmu_hash64.h.
      Definitions for the other MMUs still all come from asm-ppc/mmu.h;
      however, each MMU type can now be moved over to its own file one by
      one, cleaning it up in the process and stripping it of cruft no
      longer necessary in arch/powerpc.
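
      The resulting dispatch in asm-powerpc/mmu.h would look roughly like
      this; a sketch consistent with the description above, since the
      patch itself is not quoted here:

          #ifdef CONFIG_PPC64
          /* 64-bit hash MMU definitions now live in their own header. */
          #include <asm/mmu_hash64.h>
          #else
          /* The 32-bit MMUs still all come from the old asm-ppc header. */
          #include <asm-ppc/mmu.h>
          #endif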
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  33. 24 Apr, 2007 (1 commit)
    • [POWERPC] spufs: make spu page faults not block scheduling · 57dace23
      Committed by Arnd Bergmann
      Until now, we have always entered the spu page fault handler
      with a mutex for the spu context held. This has multiple
      bad side-effects:
      - it becomes impossible to suspend the context during
        page faults
      - if an spu program attempts to access its own mmio
        areas through DMA, we get an immediate livelock when
        the nopage function tries to acquire the same mutex
      
      This patch makes the page fault logic operate on a
      struct spu_context instead of a struct spu, and moves it
      from spu_base.c to a new file fault.c inside of spufs.
      
      We now also need to copy the dar and dsisr contents
      of the last fault into the saved context, so that they remain
      accessible in case we schedule out the context before
      activating the page fault handler.
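
      That last step might look roughly like the following; a hedged
      sketch, since the csa field names are assumptions modeled on spufs'
      saved context area rather than quoted from the patch:

          /* Stash the fault registers in the saved context so the fault
           * can still be resolved after the context is scheduled out.
           * Field names are illustrative assumptions. */
          static void spuctx_save_fault_regs(struct spu_context *ctx,
                                             u64 dar, u64 dsisr)
          {
              ctx->csa.dar = dar;      /* faulting address */
              ctx->csa.dsisr = dsisr;  /* fault status/reason */
          }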
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  34. 07 Feb, 2007 (1 commit)
  35. 16 Oct, 2006 (1 commit)
  36. 28 Jun, 2006 (1 commit)