1. 17 June 2013, 1 commit
  2. 17 April 2013, 1 commit
  3. 09 October 2012, 1 commit
    • mm: replace vma prio_tree with an interval tree · 6b2dbba8
      Michel Lespinasse committed
      Implement an interval tree as a replacement for the VMA prio_tree.  The
      algorithms are similar to lib/interval_tree.c; however that code can't be
      directly reused as the interval endpoints are not explicitly stored in the
      VMA.  So instead, the common algorithm is moved into a template and the
      details (node type, how to get interval endpoints from the node, etc) are
      filled in using the C preprocessor.
      
      Once the interval tree functions are available, using them as a
      replacement for the VMA prio_tree is a relatively simple,
      mechanical job (see the sketch below).
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
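      A minimal sketch of the preprocessor-template pattern this commit
      describes, modelled on include/linux/interval_tree_generic.h; the
      node type and the NODE_START/NODE_LAST accessors are illustrative
      stand-ins, not the actual VMA wiring:

      	#include <linux/interval_tree_generic.h>
      	#include <linux/rbtree.h>

      	/* Illustrative node; a VMA derives its endpoints from
      	 * vm_pgoff and the mapping size instead of storing them. */
      	struct sample_node {
      		struct rb_node rb;
      		unsigned long start, last;	/* interval endpoints */
      		unsigned long __subtree_last;	/* maintained by the template */
      	};

      	#define NODE_START(n)	((n)->start)
      	#define NODE_LAST(n)	((n)->last)

      	/* Expands into sample_it_insert(), sample_it_remove(),
      	 * sample_it_iter_first() and sample_it_iter_next(). */
      	INTERVAL_TREE_DEFINE(struct sample_node, rb, unsigned long,
      			     __subtree_last, NODE_START, NODE_LAST,
      			     static, sample_it)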
  4. 11 August 2012, 1 commit
  5. 29 March 2012, 1 commit
  6. 27 January 2012, 2 commits
  7. 21 May 2011, 1 commit
  8. 16 May 2011, 1 commit
  9. 20 December 2010, 1 commit
    • ARM: get rid of kmap_high_l1_vipt() · 39af22a7
      Nicolas Pitre committed
      Since commit 3e4d3af5 "mm: stack based kmap_atomic()", it is no longer
      necessary to carry an ad hoc version of kmap_atomic() added in commit
      7e5a69e8 "ARM: 6007/1: fix highmem with VIPT cache and DMA" to cope
      with reentrancy.
      
      In fact, it is now actively wrong to rely on fixed kmap type
      indices (namely KM_L1_CACHE), as kmap_atomic() ignores them
      entirely and a concurrent instance of it may reuse any slot for
      any purpose.
      Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
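      For illustration, a sketch of what "any slot for any purpose"
      means for callers of that era's two-argument kmap_atomic();
      zero_page_contents() is a made-up example, and KM_USER0 is just
      an example type whose whole point is that it is ignored:

      	#include <linux/highmem.h>
      	#include <linux/string.h>

      	static void zero_page_contents(struct page *page)
      	{
      		/* The type argument no longer selects a fixed slot;
      		 * any free per-CPU slot may be handed out, so nothing
      		 * may rely on a fixed index such as the old
      		 * KM_L1_CACHE. */
      		void *vaddr = kmap_atomic(page, KM_USER0);

      		memset(vaddr, 0, PAGE_SIZE);
      		kunmap_atomic(vaddr, KM_USER0);
      	}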
  10. 15 November 2010, 1 commit
  11. 05 October 2010, 1 commit
    • ARM: 6386/1: flush_ptrace_access: invalidate correct I-cache alias · c4e259c8
      Will Deacon committed
      copy_to_user_page can be used by access_process_vm to write to an
      executable page of a process using a mapping acquired by kmap.
      For systems with I-cache aliasing, flushing the I-cache using the
      kernel mapping may leave stale data in the I-cache if the user
      mapping is of a different colour.
      
      This patch introduces a flush_icache_alias function to flush.c,
      which calls flush_icache_range with a mapping of the specified
      colour. flush_ptrace_access is then modified to call this new
      function instead of coherent_kern_range in the case of an aliasing
      I-cache and a non-aliasing D-cache.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
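      A sketch of the approach described above; the alias window base,
      the TOP_PTE() helper and the exact pte/TLB calls follow arch/arm/mm
      conventions of that era and are assumptions here:

      	/* Map the page at a kernel alias with the same cache colour
      	 * as the user address, then flush the I-cache through it. */
      	static void flush_icache_alias(unsigned long pfn, unsigned long vaddr,
      				       unsigned long len)
      	{
      		unsigned long va = FLUSH_ALIAS_START +
      				   (CACHE_COLOUR(vaddr) << PAGE_SHIFT);
      		unsigned long offset = vaddr & ~PAGE_MASK;
      		unsigned long to;

      		set_pte_ext(TOP_PTE(va), pfn_pte(pfn, PAGE_KERNEL), 0);
      		flush_tlb_kernel_page(va);	/* drop stale translations */

      		to = va + offset;
      		flush_icache_range(to, to + len);
      	}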
  12. 19 September 2010, 4 commits
  13. 14 April 2010, 1 commit
    • ARM: 6007/1: fix highmem with VIPT cache and DMA · 7e5a69e8
      Nicolas Pitre committed
      The VIVT cache of a highmem page is always flushed before the page
      is unmapped.  This cache flush is explicit through flush_cache_kmaps()
      in flush_all_zero_pkmaps(), or through __cpuc_flush_dcache_area() in
      kunmap_atomic().  There is also an implicit flush of those highmem
      pages that belonged to a just-terminated process (which makes those
      pages free), since the whole VIVT cache has to be flushed on every
      task switch.  Hence unmapped highmem pages need no cache
      maintenance in that case.
      
      However unmapped pages may still be cached with a VIPT cache because the
      cache is tagged with physical addresses.  There is no need for a whole
      cache flush during task switching for that reason, and despite the
      explicit cache flushes in flush_all_zero_pkmaps() and kunmap_atomic(),
      some highmem pages that were mapped in user space end up still cached
      even when they become unmapped.
      
      So, we do have to perform cache maintenance on those unmapped highmem
      pages in the context of DMA when using a VIPT cache.  Unfortunately,
      it is not possible to perform that cache maintenance using physical
      addresses as all the L1 cache maintenance coprocessor functions accept
      virtual addresses only.  Therefore we have no choice but to set up a
      temporary virtual mapping for that purpose.
      
      And of course the explicit cache flushing when unmapping a highmem
      page on a system with a VIPT cache can now go, which should
      increase performance.
      
      While at it, because the code in __flush_dcache_page() has to be modified
      anyway, let's also make sure the mapped highmem pages are pinned with
      kmap_high_get() for the duration of the cache maintenance operation.
      Because kunmap() unmaps highmem pages lazily, it was reported by
      Gary King <GKing@nvidia.com> that those pages ended up being
      unmapped during cache maintenance on SMP, causing segmentation
      faults.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
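      The pinning pattern reads roughly as below (a sketch; the function
      name flush_highmem_page() is made up, and the flush helper follows
      the naming of that era's flush.c):

      	static void flush_highmem_page(struct page *page)
      	{
      		if (!PageHighMem(page)) {
      			__cpuc_flush_dcache_area(page_address(page),
      						 PAGE_SIZE);
      		} else {
      			/* kmap_high_get() returns and pins an existing
      			 * kmap mapping, or NULL if none exists. */
      			void *addr = kmap_high_get(page);

      			if (addr) {
      				__cpuc_flush_dcache_area(addr, PAGE_SIZE);
      				kunmap_high(page);	/* release the pin */
      			}
      		}
      	}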
  14. 14 December 2009, 2 commits
  15. 04 December 2009, 5 commits
  16. 02 December 2009, 3 commits
  17. 30 October 2009, 1 commit
    • ARM: Fix errata 411920 workarounds · df71dfd4
      Russell King committed
      Errata 411920 indicates that any "invalidate entire instruction cache"
      operation can fail if the right conditions are present.  This is not
      limited just to those operations in flush.c, but elsewhere.  Place the
      workaround in the already existing __flush_icache_all() function
      instead.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
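      Centralized in __flush_icache_all(), the workaround reads roughly
      like this sketch (the erratum-safe sequence itself lives in a
      separate v6_icache_inval_all routine):

      	static inline void __flush_icache_all(void)
      	{
      	#ifdef CONFIG_ARM_ERRATA_411920
      		/* Erratum-safe "invalidate entire I-cache" sequence. */
      		extern void v6_icache_inval_all(void);
      		v6_icache_inval_all();
      	#else
      		/* Plain CP15 invalidate-entire-I-cache operation. */
      		asm("mcr p15, 0, %0, c7, c5, 0" : : "r" (0));
      	#endif
      	}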
  18. 24 September 2009, 1 commit
  19. 02 September 2009, 1 commit
    • ARM: 5687/1: fix an oops with highmem · 13f96d8f
      Nicolas Pitre committed
      In xdr_partial_copy_from_skb() there is this sequence:
      
      		kaddr = kmap_atomic(*ppage, KM_SKB_SUNRPC_DATA);
      		[...]
      		flush_dcache_page(*ppage);
      		kunmap_atomic(kaddr, KM_SKB_SUNRPC_DATA);
      
      Mixing flush_dcache_page() and kmap_atomic() is a bit odd,
      especially since kunmap_atomic() must deal with cache issues
      already.  OTOH the non-highmem case must use flush_dcache_page(),
      as kunmap_atomic() becomes a no-op with no cache maintenance.
      
      Problem is that with highmem the implementation of kmap_atomic()
      doesn't set page->virtual, and page_address(page) returns 0 in
      that case. Here flush_dcache_page() calls __flush_dcache_page(),
      which calls __cpuc_flush_dcache_page(page_address(page)), resulting
      in a kernel oops.
      
      None of the kmap_atomic() implementations uses set_page_address().
      Hence we can assume page_address() is always expected to return 0 in
      that case. Let's conditionally call __cpuc_flush_dcache_page() only
      when the page address is non-zero, and perform that test only when
      highmem is configured.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
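      A sketch of the resulting test, trimmed to the part described above
      (the real __flush_dcache_page() goes on to handle cache aliases):

      	static void __flush_dcache_page(struct address_space *mapping,
      					struct page *page)
      	{
      		/*
      		 * kmap_atomic() does not set page->virtual, so with
      		 * highmem configured page_address() may return 0;
      		 * kunmap_atomic() already did any needed maintenance.
      		 */
      	#ifdef CONFIG_HIGHMEM
      		if (page_address(page))
      	#endif
      			__cpuc_flush_dcache_page(page_address(page));
      	}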
  20. 01 May 2009, 1 commit
  21. 16 March 2009, 1 commit
    • [ARM] kmap support · d73cd428
      Nicolas Pitre committed
      The kmap virtual area borrows a 2MB range at the top of the 16MB area
      below PAGE_OFFSET currently reserved for kernel modules and/or the
      XIP kernel.  This 2MB corresponds to the range covered by 2 consecutive
      second-level page tables, or a single pmd entry as seen by the Linux
      page table abstraction.  Because XIP kernels are unlikely to be seen
      on systems needing highmem support, there shouldn't be any shortage of
      VM space for modules (14 MB for modules is still way more than twice the
      typical usage).
      
      Because the virtual mapping of highmem pages can go away at any moment
      after kunmap() is called on them, we need to bypass the delayed cache
      flushing provided by flush_dcache_page() in that case.
      
      The atomic kmap versions are based on fixmaps, and
      __cpuc_flush_dcache_page() is used directly in that case.
      Signed-off-by: Nicolas Pitre <nico@marvell.com>
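      The fixmap-based atomic kmap described above looks roughly like
      this sketch (TOP_PTE() and kmap_prot follow the ARM conventions of
      that era and are assumptions here):

      	void *kmap_atomic(struct page *page, enum km_type type)
      	{
      		unsigned int idx;
      		unsigned long vaddr;

      		pagefault_disable();
      		if (!PageHighMem(page))
      			return page_address(page);	/* lowmem is always mapped */

      		/* one fixmap slot per (type, CPU) pair */
      		idx = type + KM_TYPE_NR * smp_processor_id();
      		vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
      		set_pte_ext(TOP_PTE(vaddr), mk_pte(page, kmap_prot), 0);
      		local_flush_tlb_kernel_page(vaddr);	/* evict stale TLB entry */

      		return (void *)vaddr;
      	}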
  22. 01 September 2008, 1 commit
  23. 03 July 2008, 1 commit
  24. 09 January 2007, 1 commit
  25. 13 December 2006, 1 commit
    • [ARM] Unuse another Linux PTE bit · ad1ae2fe
      Russell King committed
      L_PTE_ASID is not really required to be stored in every PTE, since we
      can identify it via the address passed to set_pte_at().  So, create
      set_pte_ext() which takes the address of the PTE to set, the Linux
      PTE value, and the additional CPU PTE bits which aren't encoded in
      the Linux PTE value.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
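      The resulting set_pte_at() can then derive the extra hardware bits
      from the address alone, roughly as in this sketch:

      	/* User addresses get the non-global (ASID-tagged) bit;
      	 * kernel addresses do not.  The bit never needs to be
      	 * stored in the Linux PTE value itself. */
      	#define set_pte_at(mm, addr, ptep, pteval)			\
      		do {							\
      			set_pte_ext(ptep, pteval,			\
      				    (addr) >= TASK_SIZE ? 0 : PTE_EXT_NG); \
      		} while (0)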
  26. 25 September 2006, 1 commit
  27. 20 September 2006, 1 commit
  28. 03 September 2006, 1 commit
    • [ARM] 3762/1: Fix ptrace cache coherency bug for ARM1136 VIPT nonaliasing Harvard caches · a188ad2b
      George G. Davis committed
      Patch from George G. Davis
      
      Resolve ARM1136 VIPT non-aliasing cache coherency issues observed
      when using ptrace to set breakpoints, and clean up
      copy_{to,from}_user_page() while we're here, as requested by
      Russell King because "it's also far too heavy on non-v6 CPUs".
      
      NOTES:
      
      1. Only access_process_vm() calls copy_{to,from}_user_page().
      2. access_process_vm() calls get_user_pages() to pin down the "page".
      3. get_user_pages() calls flush_dcache_page(page) which ensures cache
         coherency between kernel and userspace mappings of "page".  However
         flush_dcache_page(page) may not invalidate I-Cache over this range
         for all cases, specifically, I-Cache is not invalidated for the VIPT
         non-aliasing case.  So memory is consistent between kernel and user
         space mappings of "page" but I-Cache may still be hot over this
         range.  IOW, we don't have to worry about flush_cache_page() before
         memcpy().
      4. Now, for the copy_to_user_page() case, after memcpy(), we must flush
         the caches so memory is consistent with kernel cache entries and
         invalidate the I-Cache if this mm region is executable.  We don't
         need to do anything after memcpy() for the copy_from_user_page()
         case since kernel cache entries will be invalidated via the same
          process above if we access "page" again.  The flush_ptrace_access()
          function (borrowed from the SPARC64 implementation) is added to
          handle cache flushing after memcpy() for the copy_to_user_page()
          case, as sketched below.
      Signed-off-by: George G. Davis <gdavis@mvista.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
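      A sketch of the resulting helpers; the exact flush_ptrace_access()
      argument list here is an assumption based on the description above:

      	#define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
      		do {							\
      			memcpy(dst, src, len);				\
      			/* writeback D-cache; invalidate I-cache	\
      			 * if the region is executable */		\
      			flush_ptrace_access(vma, page, vaddr, dst, len); \
      		} while (0)

      	/* No post-memcpy flush needed when reading: stale kernel
      	 * cache entries are invalidated via flush_dcache_page() on
      	 * the next access (see note 3 above). */
      	#define copy_from_user_page(vma, page, vaddr, dst, src, len)	\
      		memcpy(dst, src, len)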
  29. 11 March 2006, 1 commit