1. 14 April 2010, 1 commit
    • ARM: 6007/1: fix highmem with VIPT cache and DMA · 7e5a69e8
      Authored by Nicolas Pitre
      The VIVT cache of a highmem page is always flushed before the page
      is unmapped.  This cache flush is explicit through flush_cache_kmaps()
      in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
      kunmap_atomic().  There is also an implicit flush of those highmem pages
      that belonged to a process that just terminated, making those pages free,
      because the whole VIVT cache has to be flushed on every task switch.
      Hence unmapped highmem pages need no cache maintenance in that case.
      
      However, unmapped pages may still be cached with a VIPT cache because the
      cache is tagged with physical addresses.  There is no need for a whole
      cache flush during task switching for that reason, and despite the
      explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
      some highmem pages that were mapped in user space end up still cached
      even when they become unmapped.
      
      So, we do have to perform cache maintenance on those unmapped highmem
      pages in the context of DMA when using a VIPT cache.  Unfortunately,
      it is not possible to perform that cache maintenance using physical
      addresses as all the L1 cache maintenance coprocessor functions accept
      virtual addresses only.  Therefore we have no choice but to set up a
      temporary virtual mapping for that purpose.
      
      And of course the explicit cache flushing when unmapping a highmem page
      on a system with a VIPT cache can now go away, which should improve
      performance.
      
      While at it, because the code in __flush_dcache_page() has to be modified
      anyway, let's also make sure that mapped highmem pages are pinned with
      kmap_high_get() for the duration of the cache maintenance operation.
      Because kunmap() unmaps highmem pages lazily, Gary King <GKing@nvidia.com>
      reported that those pages ended up being unmapped during cache maintenance
      on SMP, causing segmentation faults.  (A simplified sketch of the resulting
      maintenance path follows this entry.)
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
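      To make the above concrete, here is a minimal sketch of the resulting
      maintenance path, assuming the ARM primitives of that era:
      __cpuc_flush_dcache_area(), kmap_high_get() and kunmap_high().  The
      helper name and the fallback to the generic kmap_atomic()/kunmap_atomic()
      pair are illustrative assumptions; the actual commit sets up its own
      dedicated fixmap slot for the temporary mapping.

      #include <linux/highmem.h>
      #include <linux/mm.h>
      #include <asm/cacheflush.h>

      /*
       * Simplified sketch, not the actual patch: give the VIPT L1 cache
       * maintenance code a virtual address for a highmem page, even when
       * that page currently has no kernel mapping.
       */
      static void flush_highmem_page_vipt(struct page *page)
      {
              void *vaddr;

              /* If the page sits in the permanent kmap area, pin that mapping
               * so a lazy kunmap() cannot tear it down under us. */
              vaddr = kmap_high_get(page);
              if (vaddr) {
                      __cpuc_flush_dcache_area(vaddr, PAGE_SIZE);
                      kunmap_high(page);      /* drop the pin taken above */
                      return;
              }

              /* Otherwise create a transient mapping purely so the L1
               * maintenance instructions have a virtual address to act on. */
              vaddr = kmap_atomic(page);
              __cpuc_flush_dcache_area(vaddr, PAGE_SIZE);
              kunmap_atomic(vaddr);
      }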
  2. 16 March 2009, 2 commits
    • [ARM] introduce dma_cache_maint_page() · 43377453
      Authored by Nicolas Pitre
      This is a helper to be used by the DMA mapping API to handle cache
      maintenance for memory identified by a page structure instead of a
      virtual address.  Those pages may or may not be highmem pages, and
      when they are highmem pages, they may or may not be virtually mapped.
      When they are not mapped, there is no L1 cache to worry about.  But
      even in that case the L2 cache must be processed, since unmapped highmem
      pages can still be L2 cached.  (A sketch of such a helper follows this
      entry.)
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
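      A minimal sketch of such a page-based helper, under stated assumptions:
      the helper name, the single "flush" operation used for every DMA
      direction, and the use of kmap_high_get() to detect a mapped highmem
      page are illustrative simplifications rather than the actual code.

      #include <linux/highmem.h>
      #include <linux/mm.h>
      #include <asm/cacheflush.h>

      static void dma_cache_maint_page_sketch(struct page *page, size_t size)
      {
              unsigned long paddr = page_to_phys(page);
              void *vaddr;

              if (!PageHighMem(page)) {
                      /* Lowmem pages always have a linear-map address, so the
                       * L1 cache can be maintained directly. */
                      __cpuc_flush_dcache_area(page_address(page), size);
              } else if ((vaddr = kmap_high_get(page)) != NULL) {
                      /* A mapped highmem page may have L1 cache lines; pin
                       * the mapping while operating on it, then drop the pin. */
                      __cpuc_flush_dcache_area(vaddr, size);
                      kunmap_high(page);
              }

              /* An unmapped highmem page has no L1 lines to worry about, but
               * the physically addressed outer (L2) cache must still be
               * maintained, mapped or not. */
              outer_flush_range(paddr, paddr + size);
      }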
    • [ARM] kmap support · d73cd428
      Authored by Nicolas Pitre
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly there.  (Sketches of the
      kmap area layout and of this flush policy follow this entry.)
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
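      The placement described in the first paragraph can be written down with
      the usual pkmap constants.  Treat the following as a sketch of the layout
      in the commit's own terms rather than a quote of the ARM header; the
      macro names follow the generic highmem convention and are assumptions
      here.

      #include <asm/memory.h>
      #include <asm/pgtable.h>

      /* 2 MB carved out of the module/XIP region immediately below
       * PAGE_OFFSET: one Linux pmd entry, i.e. two consecutive hardware
       * second-level page tables. */
      #define PKMAP_BASE        (PAGE_OFFSET - PMD_SIZE)

      /* One Linux page table's worth of slots (512 x 4 KB = 2 MB). */
      #define LAST_PKMAP        PTRS_PER_PTE
      #define LAST_PKMAP_MASK   (LAST_PKMAP - 1)
      #define PKMAP_NR(virt)    (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
      #define PKMAP_ADDR(nr)    (PKMAP_BASE + ((nr) << PAGE_SHIFT))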
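      And a minimal sketch of the flush_dcache_page() policy implied by the
      second paragraph, assuming ARM's PG_dcache_dirty bit and its
      __flush_dcache_page() helper of that era; the condition is simplified
      here.

      #include <linux/mm.h>
      #include <linux/fs.h>
      #include <asm/cacheflush.h>

      /*
       * Sketch of the policy, not the exact ARM implementation: the deferred
       * ("lazy") D-cache flush is only safe for lowmem pages, whose kernel
       * mapping is permanent.  A highmem page may lose its mapping at any
       * time after kunmap(), so it is flushed while a mapping still exists.
       */
      void flush_dcache_page(struct page *page)
      {
              struct address_space *mapping = page_mapping(page);

              if (!PageHighMem(page) && mapping && !mapping_mapped(mapping))
                      set_bit(PG_dcache_dirty, &page->flags);  /* flush later */
              else
                      __flush_dcache_page(mapping, page);      /* flush now */
      }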