1. 27 Oct 2010 (1 commit)
  2. 21 Feb 2010 (1 commit)
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Committed by Russell King
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 13 Jan 2010 (2 commits)
    • sh: Rename split-level pgtable headers. · e44d6c40
      Committed by Paul Mundt
      These were originally named _nopmd and _pmd to follow their asm-generic
      counterparts, but we rename them to -2level and -3level for general
      consistency.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: default to extended TLB support. · 782bb5a5
      Committed by Paul Mundt
      All SH-X2 and SH-X3 parts support an extended TLB mode, which has been
      left as experimental since support was originally merged. Now that it's
      had some time to stabilize and get some exposure to various platforms,
      we can drop it as an option and default enable it across the board.
      
      This is also good future proofing for newer parts that will drop support
      for the legacy TLB mode completely.
      
      This will also force 3-level page tables for all newer parts, which is
      necessary both for the varying page sizes and larger memories.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  4. 02 Jan 2010 (1 commit)
    • sh: Move page table allocation out of line · 2a5eacca
      Committed by Matt Fleming
      This also switches away from quicklists and instead moves to slab
      caches. After benchmarking both implementations, the difference is
      negligible. The slab caches suit us better though because the size of a
      pgd table is just 4 entries when we're using a 3-level page table layout
      and quicklists always deal with pages.
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
  5. 17 Dec 2009 (2 commits)
  6. 10 Oct 2009 (2 commits)
  7. 15 Aug 2009 (1 commit)
  8. 04 Aug 2009 (1 commit)
  9. 28 Jul 2009 (1 commit)
  10. 27 Jul 2009 (2 commits)
  11. 22 Jul 2009 (1 commit)
    • sh: Migrate from PG_mapped to PG_dcache_dirty. · 2277ab4a
      Committed by Paul Mundt
      This inverts the delayed dcache flush a bit to be more in line with other
      platforms. At the same time this also gives us the ability to do some
      more optimizations and cleanup. Now that the update_mmu_cache() callsite
      only tests for the bit, the implementation can gradually be split out and
      made generic, rather than relying on special implementations for each of
      the peculiar CPU types.
      
      SH7705 in 32kB mode and SH-4 still need slightly different handling, but
      this is something that can remain isolated in the varying page copy/clear
      routines. On top of that, SH-X3 is dcache coherent, so there is no need
      to bother with any of these tests in the PTEAEX version of
      update_mmu_cache(), so we kill that off too.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  12. 07 May 2009 (1 commit)
  13. 10 Nov 2008 (1 commit)
  14. 12 Sep 2008 (1 commit)
  15. 29 Jul 2008 (1 commit)
  16. 28 Jan 2008 (6 commits)
  17. 07 Nov 2007 (1 commit)
  18. 30 Oct 2007 (1 commit)
    • sh: Correct pte_page() breakage. · afca0357
      Committed by Paul Mundt
      As noted by David:
      
      pte_page() is a macro defined as follows:
      
          include/asm-sh/pgtable.h
          #define pte_page(x)    phys_to_page(pte_val(x)&PTE_PHYS_MASK)
      
          include/asm-sh/page.h
          #define phys_to_page(phys)    (pfn_to_page(phys >> PAGE_SHIFT))
      
      So as you can see, the phys_to_page() macro doesn't wrap the 'phys'
      parameter in parentheses, so we end up with:
      
          pte_val(x)&PTE_PHYS_MASK >> PAGE_SHIFT
      
      Which is not what we wanted as '>>' has a higher precedence than bitwise
      AND. I dug into the git repository and I believe this bug was added with
      this commit (104b8dea);
      
      2006-03-27 KAMEZAWA Hiroyuki [PATCH] unify pfn_to_page: sh pfn_to_page
      
          -#define phys_to_page(phys)     (mem_map + (((phys)-__MEMORY_START) >> PAGE_SHIFT))
          -#define page_to_phys(page)     (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
          +#define phys_to_page(phys)     (pfn_to_page(phys >> PAGE_SHIFT))
          +#define page_to_phys(page)     (page_to_pfn(page) << PAGE_SHIFT)
      Reported-by: David ADDISON <david.addison@st.com>
      Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  19. 21 Sep 2007 (2 commits)
    • sh: Fix up extended mode TLB for SH-X2+ cores. · d04a0f79
      Committed by Paul Mundt
      The extended mode TLB requires both 64-bit PTEs and a 64-bit pgprot;
      correspondingly, the PGD also has to be 64 bits, so fix that up.
      
      The kernel and user permission bits really are decoupled in early
      cuts of the silicon, which means that we also have to set corresponding
      kernel permissions on user pages or we end up with user pages that the
      kernel simply can't touch (!).
      
      Finally, with those things corrected, really enable MMUCR.ME and
      correct the PTEA value (this simply needs to be the upper 32-bits
      of the PTE, with the size and protection bit encoding).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Support explicit L1 cache disabling. · e7bd34a1
      Committed by Paul Mundt
      This reworks the cache mode configuration in Kconfig, and allows for
      explicit selection of write-back/write-through/off configurations.
      All of the cache flushing routines are optimized away for the off
      case.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  20. 25 Jul 2007 (1 commit)
  21. 17 Jul 2007 (1 commit)
  22. 09 May 2007 (1 commit)
  23. 05 Mar 2007 (1 commit)
  24. 14 Feb 2007 (1 commit)
  25. 13 Feb 2007 (2 commits)
    • sh: Lazy dcache writeback optimizations. · 26b7a78c
      Committed by Paul Mundt
      This converts the lazy dcache handling to the model described in
      Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
      used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
      bonus, this slightly cuts down on the cache flushing frequency.
      
      With that and the PTEA handling out of the way, the update_mmu_cache()
      implementations can be consolidated, and we no longer have to worry
      about which configuration the cache is in for the SH7705 case.
      
      And finally, explicitly disable the lazy writeback on SMP (SH-4A).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: More tidying for large base pages. · 7a847f81
      Committed by Paul Mundt
      There were a few more things that needed fixing up, namely THREAD_SIZE
      and the TLB miss handler where certain PTRS_PER_PGD == PTRS_PER_PTE
      assumptions were being made.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  26. 12 Dec 2006 (2 commits)
  27. 06 Dec 2006 (2 commits)
    • sh: Fixup pte_mkhuge() build failure. · 5b67954e
      Committed by Paul Mundt
      When hugetlbpage support isn't enabled, this can be bogus.
      Wrap it back in _PAGE_FLAGS_HARD to avoid changes to the
      base PTE when not aiming for larger sizes.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Fixup various PAGE_SIZE == 4096 assumptions. · 510c72ad
      Committed by Paul Mundt
      There were a number of places that made evil PAGE_SIZE == 4k
      assumptions that ended up breaking when trying to play with
      8k and 64k page sizes, this fixes those up.
      
      The most significant change is the way we load THREAD_SIZE,
      previously this was done via:
      
      	mov	#(THREAD_SIZE >> 8), reg
      	shll8	reg
      
      to avoid a memory access and allow the immediate load. With
      a 64k PAGE_SIZE, we're out of range for the immediate load
      size without resorting to special instructions available in
      later ISAs (movi20s and so on). The "workaround" for this is
      to bump up the shift to 10 and insert a shll2, which gives a
      bit more flexibility while still being much cheaper than a
      memory access.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>