1. 22 Feb 2016 · 1 commit
    • powerpc: Add POWER9 cputable entry · c3ab300e
      Committed by Michael Neuling
      Add a cputable entry for POWER9.  More code is required to actually
      boot and run on a POWER9, but this puts the base piece in place on
      which we can start building.
      
      The entry is copied over from POWER8, except that it:
      - Adds a new CPU_FTR_ARCH_300 bit from which new architecture
        features will hang (in subsequent patches).
      - Advertises the new user feature bits PPC_FEATURE2_ARCH_3_00 &
        HAS_IEEE128 when on POWER9.
      - Drops CPU_FTR_SUBCORE.
      - Drops PMU code and machine check.
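
      As a self-contained illustration of the idea (the feature-bit names mirror
      the commit, but the struct, values, and helpers below are simplified
      stand-ins, not the real powerpc cputable or the actual patch):

        #include <stdint.h>
        #include <stdio.h>

        /* Placeholder bit values; the real kernel defines these elsewhere. */
        #define CPU_FTR_ARCH_300        (1ULL << 0)  /* new ISA 3.00 base feature */
        #define PPC_FEATURE2_ARCH_3_00  (1U << 0)    /* advertised to userspace */

        struct cpu_entry {
            const char *name;
            uint64_t    cpu_features;    /* kernel-visible feature bits */
            uint32_t    user_features2;  /* AT_HWCAP2-style user bits */
        };

        static const struct cpu_entry power9 = {
            .name           = "POWER9",
            .cpu_features   = CPU_FTR_ARCH_300,
            .user_features2 = PPC_FEATURE2_ARCH_3_00,
        };

        int main(void)
        {
            /* Subsequent patches key new behaviour off the architecture bit: */
            if (power9.cpu_features & CPU_FTR_ARCH_300)
                printf("%s: ISA 3.00 base features present\n", power9.name);
            return 0;
        }
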
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 28 Jul 2014 · 1 commit
  3. 11 Jul 2014 · 1 commit
  4. 10 Jan 2014 · 1 commit
    • powerpc/e6500: TLB miss handler with hardware tablewalk support · 28efc35f
      Committed by Scott Wood
      There are a few things that make the existing hw tablewalk handlers
      unsuitable for e6500:
      
       - Indirect entries go in TLB1 (though the resulting direct entries go in
         TLB0).
      
       - It has threads, but no "tlbsrx." -- so we need a spinlock and
         a normal "tlbsx".  Because we need this lock, hardware tablewalk
         is mandatory on e6500 unless we want to add spinlock+tlbsx to
         the normal bolted TLB miss handler.
      
       - TLB1 has no HES (nor a next-victim hint), so we need a software round
         robin (TODO: integrate this round-robin data with hugetlb/KVM); see the
         sketch after this list.
      
       - The existing tablewalk handlers map half of a page table at a time,
         because IBM hardware has a fixed 1MiB indirect page size.  e6500
         has variable size indirect entries, with a minimum of 2MiB.
         So we can't do the half-page indirect mapping, and even if we
         could it would be less efficient than mapping the full page.
      
       - Like on e5500, the linear mapping is bolted, so we don't need the
         overhead of supporting nested tlb misses.
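
      A very rough, illustrative-only sketch of the lock-plus-round-robin idea
      from the bullets above (the real handler is assembly; every name and size
      below is a placeholder, not the actual e6500 code):

        #include <stdint.h>

        #define TLB1_NENTRIES 64            /* placeholder TLB1 size */

        struct tlb_core_data {
            int          lock;              /* stand-in for the per-core spinlock
                                               taken around the tlbsx search */
            unsigned int tlb1_next_victim;  /* software round-robin pointer */
        };

        /* TLB1 has no hardware next-victim hint, so pick the slot to replace
         * in software, wrapping around the array of entries. */
        static unsigned int tlb1_pick_victim(struct tlb_core_data *tcd)
        {
            unsigned int esel = tcd->tlb1_next_victim;

            tcd->tlb1_next_victim = (esel + 1) % TLB1_NENTRIES;
            return esel;
        }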
      
      Note that hardware tablewalk does not work in rev1 of e6500.
      We do not expect to support e6500 rev1 in mainline Linux.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Cc: Mihai Caraman <mihai.caraman@freescale.com>
  5. 15 Nov 2012 · 1 commit
  6. 17 Sep 2012 · 1 commit
  7. 03 Jul 2012 · 1 commit
  8. 20 Sep 2011 · 1 commit
    • powerpc: Hugetlb for BookE · 41151e77
      Committed by Becky Bruce
      Enable hugepages on Freescale BookE processors.  This allows the kernel to
      use huge TLB entries to map pages, which can greatly reduce the number of
      TLB misses and the amount of TLB thrashing experienced by applications with
      large memory footprints.  Care should be taken when using this on FSL
      processors, as the number of large TLB entries supported by the core is low
      (16-64) on current processors.
      
      The supported set of hugepage sizes includes 4m, 16m, 64m, 256m, and 1g.
      Page sizes larger than the max zone size are called "gigantic" pages and
      must be allocated on the command line (and cannot be deallocated).
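
      As a usage illustration only (the sizes are placeholders and must be
      chosen from whatever the core actually supports), gigantic pages would be
      reserved on the kernel command line roughly like this:

        hugepagesz=1g hugepages=2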
      
      This is currently only fully implemented for Freescale 32-bit BookE
      processors, but there is some infrastructure in the code for
      64-bit BookE.
      Signed-off-by: Becky Bruce <beckyb@kernel.crashing.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  9. 12 Jul 2011 · 1 commit
  10. 08 Jul 2011 · 1 commit
  11. 04 May 2011 · 1 commit
  12. 27 Apr 2011 · 1 commit
  13. 05 Aug 2010 · 1 commit
    • memblock: Remove rmo_size, burry it in arch/powerpc where it belongs · cd3db0c4
      Committed by Benjamin Herrenschmidt
      The RMA (RMO is a misnomer) is a concept specific to ppc64 (in fact
      server ppc64, though I hijack it on embedded ppc64 for similar purposes)
      and represents the area of memory that can be accessed in real mode
      (i.e. with the MMU off), or, on embedded, from the exception vectors
      (which are bolted in the TLB), which pretty much boils down to the same
      thing.
      
      We take that out of the generic MEMBLOCK data structure and move it into
      arch/powerpc where it belongs, renaming it to "RMA" while we're at it.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  14. 05 May 2010 · 1 commit
  15. 28 Aug 2009 · 1 commit
  16. 20 Aug 2009 · 1 commit
  17. 09 Jun 2009 · 1 commit
  18. 21 May 2009 · 1 commit
  19. 23 Apr 2009 · 1 commit
  20. 07 Apr 2009 · 1 commit
  21. 24 Mar 2009 · 2 commits
  22. 09 Mar 2009 · 1 commit
  23. 13 Feb 2009 · 1 commit
  24. 21 Dec 2008 · 2 commits
  25. 04 Aug 2008 · 1 commit
  26. 20 Aug 2007 · 1 commit
  27. 03 Jul 2007 · 2 commits
  28. 14 Jun 2007 · 1 commit
  29. 02 May 2007 · 1 commit
  30. 27 Apr 2007 · 1 commit
    • [POWERPC] Prepare for splitting up mmu.h by MMU type · 8d2169e8
      Committed by David Gibson
      Currently asm-powerpc/mmu.h has definitions for the 64-bit hash based
      MMU.  If CONFIG_PPC64 is not set, it instead includes asm-ppc/mmu.h
      which contains a particularly horrible mess of #ifdefs giving the
      definitions for all the various 32-bit MMUs.
      
      It would be nice to have the low level definitions for each MMU type
      neatly in their own separate files.  It would also be good to wean
      arch/powerpc off dependence on the old asm-ppc/mmu.h.
      
      This patch makes a start on such a cleanup by moving the definitions
      for the 64-bit hash MMU to their own file, asm-powerpc/mmu_hash64.h.
      Definitions for the other MMUs still all come from asm-ppc/mmu.h;
      however, each MMU type can now be moved over one by one to its own
      file, in the process cleaning them up and stripping them of cruft that
      is no longer necessary in arch/powerpc.
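
      Roughly, the dispatch in asm-powerpc/mmu.h after this patch looks like
      the sketch below (simplified, not the literal header contents):

        /* asm-powerpc/mmu.h, simplified */
        #ifdef CONFIG_PPC64
        /* 64-bit hash MMU definitions now live in their own header
         * (asm-powerpc/mmu_hash64.h). */
        #include <asm/mmu_hash64.h>
        #else
        /* The various 32-bit MMUs still come from the old asm-ppc/mmu.h. */
        #include <asm-ppc/mmu.h>
        #endif
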
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  31. 24 Apr 2007 · 1 commit
    • [POWERPC] spufs: make spu page faults not block scheduling · 57dace23
      Committed by Arnd Bergmann
      Until now, we have always entered the spu page fault handler
      with a mutex for the spu context held. This has multiple
      bad side-effects:
      - it becomes impossible to suspend the context during
        page faults
      - if an spu program attempts to access its own mmio
        areas through DMA, we get an immediate livelock when
        the nopage function tries to acquire the same mutex
      
      This patch makes the page fault logic operate on a
      struct spu_context instead of a struct spu, and moves it
      from spu_base.c to a new file fault.c inside of spufs.
      
      We now also need to copy the dar and dsisr contents
      of the last fault into the saved context to have them
      accessible in case we schedule out the context before
      activating the page fault handler.
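
      A minimal sketch of the "save the fault registers with the context" idea
      (struct and function names here are hypothetical, not the spufs code):

        /* Saved copy of the last fault, kept with the scheduled-out context. */
        struct saved_fault {
            unsigned long dar;    /* faulting address */
            unsigned long dsisr;  /* fault status/reason */
        };

        /* Copy the last fault's DAR/DSISR so the fault can still be resolved
         * after the context has been scheduled out. */
        static void save_last_fault(struct saved_fault *saved,
                                    unsigned long dar, unsigned long dsisr)
        {
            saved->dar = dar;
            saved->dsisr = dsisr;
        }
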
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  32. 07 Feb 2007 · 1 commit
  33. 16 Oct 2006 · 1 commit
  34. 28 Jun 2006 · 1 commit
  35. 15 Jun 2006 · 1 commit
    • powerpc: Use 64k pages without needing cache-inhibited large pages · bf72aeba
      Committed by Paul Mackerras
      Some POWER5+ machines can do 64k hardware pages for normal memory but
      not for cache-inhibited pages.  This patch lets us use 64k hardware
      pages for most user processes on such machines (assuming the kernel
      has been configured with CONFIG_PPC_64K_PAGES=y).  User processes
      start out using 64k pages and get switched to 4k pages if they use any
      non-cacheable mappings.
      
      With this, we use 64k pages for the vmalloc region and 4k pages for
      the imalloc region.  If anything creates a non-cacheable mapping in
      the vmalloc region, the vmalloc region will get switched to 4k pages.
      I don't know of any driver other than the DRM that would do this,
      though, and these machines don't have AGP.
      
      When a region gets switched from 64k pages to 4k pages, we do not have
      to clear out all the 64k HPTEs from the hash table immediately.  We
      use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page
      was hashed in as a 64k page or a set of 4k pages.  If hash_page is
      trying to insert a 4k page for a Linux PTE and it sees that it has
      already been inserted as a 64k page, it first invalidates the 64k HPTE
      before inserting the 4k HPTE.  The hash invalidation routines also use
      the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a
      set of 4k HPTEs to remove.  With those two changes, we can tolerate a
      mix of 4k and 64k HPTEs in the hash table, and they will all get
      removed when the address space is torn down.
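
      A simplified sketch of the demotion logic described above (helper and
      flag names are made up for illustration; the real code is the powerpc
      hash fault path, not this):

        enum hpte_kind { HPTE_NONE, HPTE_64K, HPTE_4K };

        #define PTE_COMBO 0x1UL  /* stand-in for the Linux _PAGE_COMBO bit */

        struct linux_pte {
            unsigned long  flags;      /* would carry _PAGE_COMBO and friends */
            enum hpte_kind hashed_as;  /* how this PTE was last hashed in */
        };

        static void invalidate_64k_hpte(struct linux_pte *pte) { (void)pte; /* stub */ }
        static void insert_4k_hpte(struct linux_pte *pte)      { (void)pte; /* stub */ }

        /* Insert a 4k HPTE for a PTE that may already be hashed in as 64k. */
        static void hash_insert_4k(struct linux_pte *pte)
        {
            if (pte->hashed_as == HPTE_64K)
                /* Invalidate the old 64k HPTE first, so the hash never holds
                 * both forms for the same address at once. */
                invalidate_64k_hpte(pte);

            insert_4k_hpte(pte);
            pte->flags |= PTE_COMBO;   /* remember: now a set of 4k HPTEs */
            pte->hashed_as = HPTE_4K;
        }
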
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  36. 09 Jun 2006 · 2 commits
    • [PATCH] powerpc: Fix buglet with MMU hash management · c5cf0e30
      Committed by Benjamin Herrenschmidt
      Our MMU hash management code would not set the "C" bit (changed bit) in
      the hardware PTE when updating an RO PTE into an RW PTE.  That would
      cause the hardware to possibly do a write-back to the hash table to set
      it on the first store access, which, in addition to being a performance
      issue, might also hit a bug when running with native hash management
      (non-HV), as our code is specifically optimized for the case where no
      write-back happens.
      
      Thus there is a very small theoretical window where a hash PTE can
      become corrupted if that HPTE has just been upgraded to read-write, a
      store access happens on it, and that races with another processor
      evicting that same slot.  Since eviction (caused by an almost full
      hash) is extremely rare, the bug is fortunately very unlikely to
      happen.
      
      This is fixed by allowing the updating of the protection bits in the
      native hash handling to also set (but not clear) the "C" bit, and, in
      order to also improve performance in the general case, by always
      setting that bit on newly inserted hash PTEs so that write-back really
      never happens.
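
      A minimal sketch of the two changes (bit names and values below are
      placeholders, not the real HPTE layout or the actual native hash code):

        #include <stdint.h>

        #define HPTE_RW_BIT (1ULL << 1)  /* placeholder: write permission */
        #define HPTE_C_BIT  (1ULL << 7)  /* placeholder: "changed" bit */

        /* When upgrading an HPTE from RO to RW, also set (never clear) C so
         * the hardware has no reason to write the bit back later. */
        static inline uint64_t hpte_make_writable(uint64_t hpte)
        {
            return hpte | HPTE_RW_BIT | HPTE_C_BIT;
        }

        /* Newly inserted HPTEs get C pre-set for the same reason. */
        static inline uint64_t hpte_new(uint64_t rpn_and_prot)
        {
            return rpn_and_prot | HPTE_C_BIT;
        }
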
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc vdso updates · a5bba930
      Committed by Benjamin Herrenschmidt
      This patch cleans up some locking & error handling in the ppc vdso and
      moves the vdso base pointer from the thread struct to the mm context,
      where it more logically belongs.  It brings the powerpc implementation
      closer to Ingo's new x86 one, and also adds an arch_vma_name() function
      that allows printing [vdso] in /proc/<pid>/maps if Ingo's x86 vdso
      patch is also applied.
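
      A rough sketch of what such an arch_vma_name() can look like (the structs
      below are stand-ins for the real kernel types, purely for illustration):

        #include <stddef.h>

        struct mm_ctx   { unsigned long vdso_base; };
        struct mm_stub  { struct mm_ctx context; };
        struct vma_stub { unsigned long vm_start; struct mm_stub *vm_mm; };

        /* Name the VMA that starts at the per-mm vdso base as "[vdso]" so it
         * shows up that way in /proc/<pid>/maps. */
        const char *arch_vma_name(struct vma_stub *vma)
        {
            if (vma->vm_mm && vma->vm_start == vma->vm_mm->context.vdso_base)
                return "[vdso]";
            return NULL;
        }
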
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>