1. 10 Oct 2009, 2 commits
  2. 09 Oct 2009, 1 commit
  3. 09 Sep 2009, 6 commits
    • sh: Fix up redundant cache flushing for PAGE_SIZE > 4k. · c4845a4b
      Committed by Paul Mundt
      If PAGE_SIZE is presently over 4k we do a lot of extra flushing given
      that we purge the cache 4k at a time. Make it explicitly 4k per
      iteration, rather than iterating for PAGE_SIZE before looping over again.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
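      As a rough illustration, here is a minimal userspace sketch of the
      iteration change described above: the page is walked in explicit
      4 KiB purge-sized steps instead of treating PAGE_SIZE as the
      per-iteration flush size. purge_4k_block() and the addresses are
      hypothetical stand-ins, not the actual SH-4 cacheop code.

      #include <stdio.h>

      #define FLUSH_BLOCK 4096UL    /* hardware purge granularity */

      /* Hypothetical stand-in for the 4 KiB cache purge primitive. */
      static void purge_4k_block(unsigned long addr)
      {
          printf("purge 4 KiB at 0x%lx\n", addr);
      }

      /* Flush one page in explicit 4 KiB steps, so a 64 KiB PAGE_SIZE
       * issues 16 distinct purges instead of redundantly re-flushing. */
      static void flush_page(unsigned long addr, unsigned long page_size)
      {
          unsigned long end = addr + page_size;

          for (; addr < end; addr += FLUSH_BLOCK)
              purge_4k_block(addr);
      }

      int main(void)
      {
          flush_page(0x10000000UL, 65536UL);    /* model a 64 KiB page */
          return 0;
      }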
    • sh: Rework sh4_flush_cache_page() for coherent kmap mapping. · deaef20e
      Committed by Paul Mundt
      This builds on top of the MIPS r4k code that does roughly the same thing.
      This permits the use of kmap_coherent() for mapped pages with dirty
      dcache lines and falls back on kmap_atomic() otherwise.
      
      This also fixes up a problem with the alias check and defers to
      shm_align_mask directly.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
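      A rough userspace model of the path selection described above;
      map_coherent()/map_atomic() are hypothetical stand-ins for the
      kernel's kmap_coherent()/kmap_atomic(), and the page flags are
      simplified.

      #include <stdbool.h>
      #include <stdio.h>

      struct model_page {
          bool mapped;          /* page has user-space mappings  */
          bool dcache_dirty;    /* deferred dcache flush pending */
      };

      /* Stand-ins for the two kernel mapping paths. */
      static void map_coherent(void) { puts("kmap_coherent path"); }
      static void map_atomic(void)   { puts("kmap_atomic path"); }

      /* Mapped pages with dirty dcache lines take the coherent kmap;
       * everything else falls back on the atomic kmap. */
      static void map_for_flush(const struct model_page *p)
      {
          if (p->mapped && p->dcache_dirty)
              map_coherent();
          else
              map_atomic();
      }

      int main(void)
      {
          struct model_page a = { .mapped = true,  .dcache_dirty = true  };
          struct model_page b = { .mapped = false, .dcache_dirty = false };

          map_for_flush(&a);
          map_for_flush(&b);
          return 0;
      }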
    • sh: Kill off segment-based d-cache flushing on SH-4. · bd6df574
      Committed by Paul Mundt
      This kills off the unrolled segment based flushers on SH-4 and switches
      over to a generic unrolled approach derived from the writethrough segment
      flusher.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
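      To illustrate what 'unrolled' means here, a userspace-only sketch:
      the whole dcache is modelled as a walk over every address-array
      entry, four entries per loop iteration. The cache geometry and
      invalidate_entry() are assumptions for the example, not the real
      SH-4 figures or cache instructions.

      #include <stdio.h>

      #define CACHE_WAYS 2     /* assumed geometry, example only */
      #define CACHE_SETS 64

      /* Stand-in for writing a cache address-array entry to flush it. */
      static void invalidate_entry(int way, int set)
      {
          printf("invalidate way %d set %d\n", way, set);
      }

      /* Whole-dcache flush, unrolled four sets per iteration to cut
       * loop overhead; CACHE_SETS is assumed to be a multiple of 4. */
      static void flush_dcache_all_model(void)
      {
          int way, set;

          for (way = 0; way < CACHE_WAYS; way++)
              for (set = 0; set < CACHE_SETS; set += 4) {
                  invalidate_entry(way, set + 0);
                  invalidate_entry(way, set + 1);
                  invalidate_entry(way, set + 2);
                  invalidate_entry(way, set + 3);
              }
      }

      int main(void)
      {
          flush_dcache_all_model();
          return 0;
      }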
    • sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page(). · 31c9efde
      Committed by Paul Mundt
      PHYSADDR() runs into issues in 32-bit mode when we do not have the
      legacy P1/P2 areas mapped; as such, we need to use page_to_phys()
      directly, which also happens to do the right thing in legacy 29-bit mode.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
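      A small userspace illustration of the distinction: a mask-style
      PHYSADDR() only works for addresses inside a fixed-offset window
      such as the legacy P1 segment, while a per-page lookup in the
      style of page_to_phys() does not care how the page is mapped.
      The window base, mask and lookup structure are assumptions made
      for the example.

      #include <stdio.h>

      #define P1_BASE   0x80000000UL    /* fixed-offset window (29-bit mode) */
      #define PHYS_MASK 0x1fffffffUL

      /* Mask-based translation: only valid inside the P1/P2 window. */
      static unsigned long physaddr_mask(unsigned long vaddr)
      {
          return vaddr & PHYS_MASK;
      }

      /* Lookup-based translation modelled on page_to_phys(): the page
       * itself records its physical address, so it keeps working when
       * the kernel mapping is not a fixed offset (32-bit mode). */
      struct model_page { unsigned long phys; };

      static unsigned long page_to_phys_model(const struct model_page *page)
      {
          return page->phys;
      }

      int main(void)
      {
          struct model_page pg = { .phys = 0x0c001000UL };
          unsigned long p1_vaddr = P1_BASE + pg.phys;

          printf("mask  : 0x%lx\n", physaddr_mask(p1_vaddr));
          printf("lookup: 0x%lx\n", page_to_phys_model(&pg));
          return 0;
      }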
    • sh: sh4_flush_cache_mm() optimizations. · 654d364e
      Committed by Paul Mundt
      The i-cache flush in the case of VM_EXEC was added way back when as a
      sanity measure, and in practice we only care about evicting aliases from
      the d-cache. As a result, it's possible to drop the i-cache flush
      completely here.
      
      After careful profiling it's also come up that all of the work associated
      with hunting down aliases and doing ranged flushing ends up generating
      more overhead than simply blasting away the entire dcache, particularly
      if there are many mm's that need to be iterated over. As a result of
      that, just move back to flush_dcache_all() in these cases, which restores
      the old behaviour, and vastly simplifies the path.
      
      Additionally, on platforms without aliases at all, this can simply be
      nopped out. Presently we have the alias check in the SH-4 specific
      version, but this is true for all of the platforms, so move the check up
      to a generic location. This cuts down quite a bit on superfluous cacheop
      IPIs.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
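      A userspace model of the resulting policy: with no alias colours
      the call is a no-op (now decided at the generic layer), and with
      aliases the whole dcache is flushed rather than walking every VMA
      of the mm. The structure and flush stand-in are illustrative only.

      #include <stdio.h>

      struct cache_info {
          int n_aliases;    /* number of dcache alias colours */
      };

      /* Stand-in for the SH-4 flush_dcache_all() primitive. */
      static void flush_dcache_all_model(void)
      {
          puts("flush entire dcache");
      }

      static void flush_cache_mm_model(const struct cache_info *c)
      {
          if (c->n_aliases == 0)
              return;    /* non-aliasing parts: nothing to do, no IPI */

          flush_dcache_all_model();    /* cheaper than ranged alias hunting */
      }

      int main(void)
      {
          struct cache_info aliasing = { .n_aliases = 4 };
          struct cache_info coherent = { .n_aliases = 0 };

          flush_cache_mm_model(&aliasing);
          flush_cache_mm_model(&coherent);
          return 0;
      }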
    • sh: Cleanup whitespace damage in sh4_flush_icache_range(). · 682f88ab
      Committed by Paul Mundt
      There was quite a lot of tab->space damage done here by a former patch;
      clean it up once and for all.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  4. 01 Sep 2009, 2 commits
  5. 27 Aug 2009, 1 commit
  6. 24 Aug 2009, 2 commits
  7. 21 Aug 2009, 2 commits
  8. 20 Aug 2009, 1 commit
  9. 15 Aug 2009, 6 commits
  10. 04 Aug 2009, 1 commit
  11. 22 Jul 2009, 1 commit
    • sh: Migrate from PG_mapped to PG_dcache_dirty. · 2277ab4a
      Committed by Paul Mundt
      This inverts the delayed dcache flush a bit to be more in line with other
      platforms. At the same time this also gives us the ability to do some
      more optimizations and cleanup. Now that the update_mmu_cache() callsite
      only tests for the bit, the implementation can gradually be split out and
      made generic, rather than relying on special implementations for each of
      the peculiar CPU types.
      
      SH7705 in 32kB mode and SH-4 still need slightly different handling, but
      this is something that can remain isolated in the varying page copy/clear
      routines. On top of that, SH-X3 is dcache coherent, so there is no need
      to bother with any of these tests in the PTEAEX version of
      update_mmu_cache(), so we kill that off too.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
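      A compact userspace model of the deferred-flush protocol being
      moved to: the flush side only marks the page, and the
      update_mmu_cache() side tests and clears the bit, flushing only
      then. The structures and writeback stand-in are illustrative,
      not the kernel's.

      #include <stdbool.h>
      #include <stdio.h>

      struct model_page {
          bool dcache_dirty;    /* models the PG_dcache_dirty page flag */
          unsigned long addr;
      };

      static void do_cache_writeback(const struct model_page *p)
      {
          printf("writeback dcache lines for page at 0x%lx\n", p->addr);
      }

      /* flush_dcache_page() analogue: defer the work, just record it. */
      static void mark_dcache_dirty(struct model_page *p)
      {
          p->dcache_dirty = true;
      }

      /* update_mmu_cache() analogue: only the bit is tested here, so
       * the same check can live in generic code for every CPU type. */
      static void update_mmu_cache_model(struct model_page *p)
      {
          if (p->dcache_dirty) {
              p->dcache_dirty = false;
              do_cache_writeback(p);
          }
      }

      int main(void)
      {
          struct model_page p = { .dcache_dirty = false, .addr = 0x10000UL };

          mark_dcache_dirty(&p);         /* kernel wrote to the page     */
          update_mmu_cache_model(&p);    /* flush happens at map-in time */
          update_mmu_cache_model(&p);    /* second call: nothing to do   */
          return 0;
      }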
  12. 08 Sep 2008, 1 commit
  13. 28 Jul 2008, 1 commit
  14. 28 Jan 2008, 1 commit
  15. 24 Sep 2007, 2 commits
  16. 21 Sep 2007, 1 commit
  17. 25 Jul 2007, 1 commit
  18. 24 Jul 2007, 1 commit
    • sh: Add kmap_coherent()/kunmap_coherent() interface for SH-4. · 8cf1a743
      Committed by Paul Mundt
      This wires up kmap_coherent() and kunmap_coherent() on SH-4, and
      moves away from the p3map_mutex and reserved P3 space, opting to
      use fixmaps for colouring instead.
      
      The copy_user_page()/clear_user_page() implementations are moved
      to this, which fixes the nasty blowups with spinlock debugging
      as a result of having some of these calls nested under the page
      table lock.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
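      The colouring idea can be sketched in plain C: the temporary
      kernel mapping is placed in the slot whose virtual colour matches
      the user address, so kernel-side accesses land on the same cache
      sets as the user mapping. The alias mask and slot layout below
      are assumptions for the example, not the real SH-4 values.

      #include <stdio.h>

      #define PAGE_SHIFT 12
      #define PAGE_SIZE  (1UL << PAGE_SHIFT)
      #define ALIAS_MASK 0x3fffUL    /* assumed 16 KiB alias span: 4 colours */

      /* Colour = which alias bin a virtual address falls into. */
      static unsigned long cache_colour(unsigned long vaddr)
      {
          return (vaddr & ALIAS_MASK) >> PAGE_SHIFT;
      }

      /* Pick a fixmap-style slot with the same colour as the user
       * mapping, so no stale alias is left behind (the idea behind
       * kmap_coherent(), heavily simplified). */
      static unsigned long coherent_slot(unsigned long slot_base,
                                         unsigned long user_vaddr)
      {
          return slot_base + cache_colour(user_vaddr) * PAGE_SIZE;
      }

      int main(void)
      {
          unsigned long user_vaddr = 0x2000a000UL;
          unsigned long kvaddr = coherent_slot(0xffff8000UL, user_vaddr);

          printf("user colour %lu -> kernel map at 0x%lx (colour %lu)\n",
                 cache_colour(user_vaddr), kvaddr, cache_colour(kvaddr));
          return 0;
      }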
  19. 05 Mar 2007, 1 commit
  20. 13 Feb 2007, 2 commits
    • sh: Fixup cpu_data references for the non-boot CPUs. · 11c19656
      Committed by Paul Mundt
      There are a lot of bogus cpu_data-> references that only end up working
      for the boot CPU; convert these to current_cpu_data to fix up SMP.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
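      A tiny userspace demonstration of why the pointer form is wrong
      on SMP: cpu_data->field is just cpu_data[0].field, i.e. always
      the boot CPU's entry, while indexing by the executing CPU (which
      is what current_cpu_data does in the kernel) reads the right one.
      The structure and field are invented for the example.

      #include <stdio.h>

      #define NR_CPUS 4

      struct cpu_info { int cache_ways; };

      static struct cpu_info cpu_data[NR_CPUS];    /* one entry per CPU */

      int main(void)
      {
          int this_cpu = 3;    /* pretend we are running on CPU 3 */

          cpu_data[0].cache_ways = 1;    /* boot CPU      */
          cpu_data[3].cache_ways = 4;    /* secondary CPU */

          /* Pointer decay: always the boot CPU's data. */
          printf("cpu_data->cache_ways          = %d\n", cpu_data->cache_ways);

          /* Per-CPU indexing: the running CPU's data. */
          printf("cpu_data[this_cpu].cache_ways = %d\n",
                 cpu_data[this_cpu].cache_ways);
          return 0;
      }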
    • sh: Lazy dcache writeback optimizations. · 26b7a78c
      Committed by Paul Mundt
      This converts the lazy dcache handling to the model described in
      Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
      used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
      bonus, this slightly cuts down on the cache flushing frequency.
      
      With that and the PTEA handling out of the way, the update_mmu_cache()
      implementations can be consolidated, and we no longer have to worry
      about which configuration the cache is in for the SH7705 case.
      
      And finally, explicitly disable the lazy writeback on SMP (SH-4A).
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
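      The deferral policy from Documentation/cachetlb.txt, modelled in
      userspace: flush_dcache_page() flushes immediately only when a
      user mapping could observe stale data, and otherwise just records
      that a flush is pending. The fields below are simplified
      stand-ins for the real page/mapping state.

      #include <stdbool.h>
      #include <stdio.h>

      struct model_page {
          bool has_mapping;     /* belongs to a file/anon mapping   */
          bool user_mapped;     /* currently mapped into user space */
          bool dcache_dirty;    /* models PG_dcache_dirty           */
      };

      static void writeback_now(void)
      {
          puts("immediate dcache writeback");
      }

      /* flush_dcache_page() analogue: defer when no user mapping can
       * see stale data yet, otherwise flush right away. */
      static void flush_dcache_page_model(struct model_page *p)
      {
          if (p->has_mapping && !p->user_mapped)
              p->dcache_dirty = true;    /* lazy: flushed at map-in time */
          else
              writeback_now();
      }

      int main(void)
      {
          struct model_page pagecache = { true, false, false };
          struct model_page mapped    = { true, true,  false };

          flush_dcache_page_model(&pagecache);    /* deferred  */
          flush_dcache_page_model(&mapped);       /* immediate */
          printf("deferred flush pending: %d\n", pagecache.dcache_dirty);
          return 0;
      }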
  21. 12 Dec 2006, 1 commit
  22. 06 Dec 2006, 2 commits
    • sh: Fixup various PAGE_SIZE == 4096 assumptions. · 510c72ad
      Committed by Paul Mundt
      There were a number of places that made evil PAGE_SIZE == 4k
      assumptions that ended up breaking when trying to play with
      8k and 64k page sizes; this fixes those up.
      
      The most significant change is the way we load THREAD_SIZE;
      previously this was done via:
      
      	mov	#(THREAD_SIZE >> 8), reg
      	shll8	reg
      
      to avoid a memory access and allow the immediate load. With
      a 64k PAGE_SIZE, we're out of range for the immediate load
      size without resorting to special instructions available in
      later ISAs (movi20s and so on). The "workaround" for this is
      to bump up the shift to 10 and insert a shll2, which gives a
      bit more flexibility while still being much cheaper than a
      memory access.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
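      A quick, runnable check of the arithmetic above. The SH
      "mov #imm, Rn" immediate is a signed 8-bit value, so the operand
      must stay within -128..127; the THREAD_SIZE figures used (two
      4 KiB pages, one 64 KiB page) are assumptions chosen only to
      make the ranges concrete.

      #include <stdio.h>

      #define THREAD_SIZE_4K  (2UL * 4096)     /* assumed: two 4 KiB pages */
      #define THREAD_SIZE_64K (1UL * 65536)    /* assumed: one 64 KiB page */

      /* Does a value fit the signed 8-bit "mov #imm, Rn" immediate? */
      static int fits_mov_imm(long v)
      {
          return v >= -128 && v <= 127;
      }

      int main(void)
      {
          /* Old sequence: mov #(THREAD_SIZE >> 8), reg ; shll8 reg */
          printf("4K pages : %lu >> 8  = %lu, fits: %d\n", THREAD_SIZE_4K,
                 THREAD_SIZE_4K >> 8, fits_mov_imm(THREAD_SIZE_4K >> 8));
          printf("64K pages: %lu >> 8  = %lu, fits: %d\n", THREAD_SIZE_64K,
                 THREAD_SIZE_64K >> 8, fits_mov_imm(THREAD_SIZE_64K >> 8));

          /* New sequence: mov #(THREAD_SIZE >> 10), reg ; shll8 ; shll2 */
          printf("64K pages: %lu >> 10 = %lu, fits: %d, <<8 <<2 = %lu\n",
                 THREAD_SIZE_64K, THREAD_SIZE_64K >> 10,
                 fits_mov_imm(THREAD_SIZE_64K >> 10),
                 (THREAD_SIZE_64K >> 10) << 8 << 2);
          return 0;
      }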
    • sh: p3map_sem sem2mutex conversion. · 52e27782
      Committed by Paul Mundt
      Simple sem2mutex conversion for the p3map semaphores.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  23. 27 Sep 2006, 1 commit