1. 17 Dec, 2009 1 commit
  2. 10 Oct, 2009 2 commits
  3. 15 Aug, 2009 1 commit
  4. 04 Aug, 2009 1 commit
  5. 28 Jul, 2009 1 commit
  6. 27 Jul, 2009 2 commits
  7. 22 Jul, 2009 1 commit
    • sh: Migrate from PG_mapped to PG_dcache_dirty. · 2277ab4a
      Committed by Paul Mundt
      This inverts the delayed dcache flush a bit to be more in line with other
      platforms. At the same time this also gives us the ability to do some
      more optimizations and cleanup. Now that the update_mmu_cache() callsite
      only tests for the bit, the implementation can gradually be split out and
      made generic, rather than relying on special implementations for each of
      the peculiar CPU types.
      
      SH7705 in 32kB mode and SH-4 still need slightly different handling, but
      this is something that can remain isolated in the varying page copy/clear
      routines. On top of that, SH-X3 is dcache coherent, so there is no need
      to bother with any of these tests in the PTEAEX version of
      update_mmu_cache(), so we kill that off too.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
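      A minimal sketch of the simplified callsite this enables, based on the
      generic PG_dcache_dirty pattern rather than the exact sh code (the
      aliasing checks are omitted for brevity):

          void update_mmu_cache(struct vm_area_struct *vma,
                                unsigned long address, pte_t pte)
          {
              unsigned long pfn = pte_pfn(pte);

              if (pfn_valid(pfn)) {
                  struct page *page = pfn_to_page(pfn);

                  /* flush only if a kernel-side write left the dcache dirty */
                  if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                      __flush_wback_region(page_address(page), PAGE_SIZE);
              }
          }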
  8. 07 May, 2009 1 commit
  9. 10 Nov, 2008 1 commit
  10. 12 Sep, 2008 1 commit
  11. 29 Jul, 2008 1 commit
  12. 28 Jan, 2008 6 commits
  13. 07 Nov, 2007 1 commit
  14. 30 Oct, 2007 1 commit
    • sh: Correct pte_page() breakage. · afca0357
      Committed by Paul Mundt
      As noted by David:
      
      pte_page() is a macro defined as follows:
      
          include/asm-sh/pgtable.h
          #define pte_page(x)    phys_to_page(pte_val(x)&PTE_PHYS_MASK)
      
          include/asm-sh/page.h
          #define phys_to_page(phys)    (pfn_to_page(phys >> PAGE_SHIFT))
      
      So as you can see, the phys_to_page() macro doesn't wrap the 'phys'
      parameter in parentheses, so we end up with:
      
          pte_val(x)&PTE_PHYS_MASK >> PAGE_SHIFT
      
      Which is not what we wanted, as '>>' has higher precedence than bitwise
      AND. I dug into the git repository and I believe this bug was added by
      this commit (104b8dea):
      
      2006-03-27 KAMEZAWA Hiroyuki [PATCH] unify pfn_to_page: sh pfn_to_page
      
      -#define phys_to_page(phys)    (mem_map + (((phys)-__MEMORY_START) >> PAGE_SHIFT))
      -#define page_to_phys(page)    (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
      +#define phys_to_page(phys)    (pfn_to_page(phys >> PAGE_SHIFT))
      +#define page_to_phys(page)    (page_to_pfn(page) << PAGE_SHIFT)
      Reported-by: David ADDISON <david.addison@st.com>
      Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
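      A standalone demonstration of the precedence trap, with made-up mask
      and PTE values rather than sh's real ones:

          #include <stdio.h>

          #define PAGE_SHIFT      12
          #define PTE_PHYS_MASK   0x1ffff000UL    /* illustrative mask */

          /* as in the broken macro: 'phys' is not parenthesized */
          #define pfn_buggy(phys) (phys >> PAGE_SHIFT)
          /* the fix: parenthesize the macro argument */
          #define pfn_fixed(phys) ((phys) >> PAGE_SHIFT)

          int main(void)
          {
              unsigned long pte = 0x0c001067UL;   /* hypothetical PTE value */

              /* expands to pte & (PTE_PHYS_MASK >> PAGE_SHIFT): 0x1067, wrong */
              printf("buggy: 0x%lx\n", pfn_buggy(pte & PTE_PHYS_MASK));
              /* expands to (pte & PTE_PHYS_MASK) >> PAGE_SHIFT: 0xc001, intended */
              printf("fixed: 0x%lx\n", pfn_fixed(pte & PTE_PHYS_MASK));
              return 0;
          }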
  15. 21 Sep, 2007 2 commits
    • sh: Fix up extended mode TLB for SH-X2+ cores. · d04a0f79
      Committed by Paul Mundt
      The extended mode TLB requires both 64-bit PTEs and a 64-bit pgprot;
      correspondingly, the PGD also has to be 64 bits, so fix that up.
      
      The kernel and user permission bits really are decoupled in early
      cuts of the silicon, which means that we also have to set corresponding
      kernel permissions on user pages or we end up with user pages that the
      kernel simply can't touch (!).
      
      Finally, with those things corrected, really enable MMUCR.ME and
      correct the PTEA value (this simply needs to be the upper 32 bits
      of the PTE, with the size and protection bit encoding).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
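      A rough illustration of the PTEA relationship described above; the PTE
      value is made up, and the exact bit layout comes from the SH-4A manuals
      rather than this commit:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint64_t pte = 0x000000010c001167ULL;  /* hypothetical 64-bit PTE */

              uint32_t ptel = (uint32_t)pte;         /* PFN plus legacy protection bits */
              uint32_t ptea = (uint32_t)(pte >> 32); /* upper half: size/protection encoding */

              printf("PTEL=%08x PTEA=%08x\n", (unsigned)ptel, (unsigned)ptea);
              return 0;
          }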
    • sh: Support explicit L1 cache disabling. · e7bd34a1
      Committed by Paul Mundt
      This reworks the cache mode configuration in Kconfig, and allows for
      explicit selection of write-back/write-through/off configurations.
      All of the cache flushing routines are optimized away for the off
      case.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
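      A sketch of the compile-out pattern this enables, assuming a
      CONFIG_CACHE_OFF symbol along the lines of what the Kconfig rework
      introduces (the macro body is illustrative):

          struct page;

          #ifdef CONFIG_CACHE_OFF
          /* cache disabled: the flush compiles away to nothing */
          # define flush_dcache_page(page)    do { } while (0)
          #else
          void flush_dcache_page(struct page *page);
          #endif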
  16. 25 Jul, 2007 1 commit
  17. 17 Jul, 2007 1 commit
  18. 09 May, 2007 1 commit
  19. 05 Mar, 2007 1 commit
  20. 14 Feb, 2007 1 commit
  21. 13 Feb, 2007 2 commits
    • sh: Lazy dcache writeback optimizations. · 26b7a78c
      Committed by Paul Mundt
      This converts the lazy dcache handling to the model described in
      Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
      used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
      bonus, this slightly cuts down on the cache flushing frequency.
      
      With that and the PTEA handling out of the way, the update_mmu_cache()
      implementations can be consolidated, and we no longer have to worry
      about which configuration the cache is in for the SH7705 case.
      
      And finally, explicitly disable the lazy writeback on SMP (SH-4A).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
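      The deferred-flush idiom from Documentation/cachetlb.txt that this
      adopts looks roughly like the sketch below; __flush_dcache_page() is a
      stand-in for the real flush routine, not the sh implementation:

          void flush_dcache_page(struct page *page)
          {
              struct address_space *mapping = page_mapping(page);

              if (mapping && !mapping_mapped(mapping))
                  /* no user mappings yet: just note the kernel dirtied it */
                  set_bit(PG_dcache_dirty, &page->flags);
              else
                  /* mapped somewhere: write back the aliases now */
                  __flush_dcache_page(page);
          }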
    • sh: More tidying for large base pages. · 7a847f81
      Committed by Paul Mundt
      There were a few more things that needed fixing up, namely THREAD_SIZE
      and the TLB miss handler, where certain PTRS_PER_PGD == PTRS_PER_PTE
      assumptions were being made.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  22. 12 Dec, 2006 2 commits
  23. 06 Dec, 2006 5 commits
    • sh: Fixup pte_mkhuge() build failure. · 5b67954e
      Committed by Paul Mundt
      When hugetlbpage support isn't enabled, this can be bogus.
      Wrap it back in _PAGE_FLAGS_HARD to avoid changes to the
      base PTE when not aiming for larger sizes.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
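      The guard in question, sketched; the flag values and the size bit are
      assumptions based on the usual sh PTE layout, not the actual patch:

          typedef struct { unsigned long pte_low; } pte_t;
          #define pte_val(x)          ((x).pte_low)

          #define _PAGE_FLAGS_HARD    0x00000000UL  /* assumed: inert hardware flags */
          #define _PAGE_SZ1           0x00000080UL  /* assumed: a huge-page size bit */

          #ifdef CONFIG_HUGETLB_PAGE
          # define _PAGE_SZHUGE       (_PAGE_SZ1)
          #else
          /* no hugetlb: fall back to bits that leave the base PTE unchanged */
          # define _PAGE_SZHUGE       (_PAGE_FLAGS_HARD)
          #endif

          static inline pte_t pte_mkhuge(pte_t pte)
          {
              pte_val(pte) |= _PAGE_SZHUGE;
              return pte;
          }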
    • sh: Fixup various PAGE_SIZE == 4096 assumptions. · 510c72ad
      Committed by Paul Mundt
      There were a number of places that made evil PAGE_SIZE == 4k
      assumptions that ended up breaking when trying to play with
      8k and 64k page sizes; this fixes those up.
      
      The most significant change is the way we load THREAD_SIZE,
      previously this was done via:
      
      	mov	#(THREAD_SIZE >> 8), reg
      	shll8	reg
      
      to avoid a memory access and allow the immediate load. With
      a 64k PAGE_SIZE, we're out of range for the immediate load
      size without resorting to special instructions available in
      later ISAs (movi20s and so on). The "workaround" for this is
      to bump up the shift to 10 and insert a shll2, which gives a
      bit more flexibility while still being much cheaper than a
      memory access.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
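      To see the immediate-range issue concretely: SH's "mov #imm, Rn" takes
      a sign-extended 8-bit immediate (-128..127). A standalone check, with a
      64kB THREAD_SIZE assumed for illustration:

          #include <assert.h>

          #define PAGE_SIZE   (64 * 1024)  /* assumed 64kB page configuration */
          #define THREAD_SIZE PAGE_SIZE    /* assumed: one page per kernel stack */

          int main(void)
          {
              /* old scheme: shift by 8 gives 256, too large for the immediate */
              assert((THREAD_SIZE >> 8) == 256);
              /* new scheme: shift by 10 gives 64, which fits */
              assert((THREAD_SIZE >> 10) == 64);
              /* and shll8 + shll2 (<< 8, << 2) recovers the full value */
              assert((64 << 8 << 2) == THREAD_SIZE);
              return 0;
          }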
    • sh: TLB miss fast-path optimizations. · 9b3a53ab
      Committed by Stuart Menefy
      Handle, in assembler, simple TLB miss faults which can be resolved
      completely from the page table.
      Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
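      In C terms the fast path amounts to something like the sketch below;
      the real handler is assembly, the accessor names are the generic kernel
      page-table ones, and tlbmiss_fast() itself is a hypothetical helper:

          static int tlbmiss_fast(struct mm_struct *mm, unsigned long addr)
          {
              pgd_t *pgd = pgd_offset(mm, addr);
              pud_t *pud;
              pmd_t *pmd;
              pte_t *pte;

              if (pgd_none(*pgd))
                  return 1;                       /* punt to the slow C path */
              pud = pud_offset(pgd, addr);
              if (pud_none(*pud))
                  return 1;
              pmd = pmd_offset(pud, addr);
              if (pmd_none(*pmd))
                  return 1;
              pte = pte_offset_kernel(pmd, addr);
              if (!pte_present(*pte))
                  return 1;                       /* needs the full fault handler */

              update_mmu_cache(NULL, addr, *pte); /* just refill the TLB */
              return 0;
          }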
    • sh: pmd rework. · 99a596f9
      Committed by Stuart Menefy
      Remove extra bits from the pmd structure and store a kernel logical
      address rather than a physical address. This allows it to be directly
      dereferenced. Another piece of weirdness inherited from x86.
      Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
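      A before/after sketch of what storing a logical address buys; the macro
      names and bodies are illustrative, not the actual patch:

          /* before: the pmd held a physical address, so every walk paid a
           * phys-to-virt conversion before touching the pte table */
          #define pmd_page_vaddr_old(pmd) ((unsigned long)phys_to_virt(pmd_val(pmd)))

          /* after: the pmd holds a kernel logical address and can be
           * dereferenced directly */
          #define pmd_page_vaddr_new(pmd) ((unsigned long)pmd_val(pmd))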
    • sh: Preliminary support for SH-X2 MMU. · 21440cf0
      Committed by Paul Mundt
      This adds some preliminary support for the SH-X2 MMU, used by
      newer SH-4A parts (particularly SH7785).
      
      This MMU implements a 'compat' mode with SH-X MMUs and an
      'extended' mode for SH-X2 extended features. Extended features
      include additional page sizes (8kB, 4MB, 64MB), as well as the
      addition of page execute permissions.
      
      The extended mode attributes are placed in a second data array,
      which requires us to switch to 64-bit PTEs when in X2 mode.
      
      With the addition of the exec perms, we also overhaul the mmap
      prots somewhat, now that it's possible to handle them more
      intelligently.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
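      One way the 64-bit PTE can show up in code, sketched under the
      assumption of a CONFIG_X2TLB-style switch (the field names are
      illustrative):

          #ifdef CONFIG_X2TLB
          /* extended mode: the classic attribute word plus a second one
           * carrying the extra page sizes and execute permissions */
          typedef struct { unsigned long pte_low, pte_high; } pte_t;
          #define pte_val(x) \
              ((x).pte_low | ((unsigned long long)(x).pte_high << 32))
          #else
          /* compat mode: a plain 32-bit PTE as on SH-X */
          typedef struct { unsigned long pte_low; } pte_t;
          #define pte_val(x)  ((x).pte_low)
          #endif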
  24. 27 Sep, 2006 3 commits