1. 13 Dec, 2016 1 commit
  2. 27 Oct, 2016 1 commit
    • powerpc/mm/radix: Use tlbiel only if we ever ran on the current cpu · bd77c449
      Committed by Aneesh Kumar K.V
      Before this patch, we used tlbiel if we had only ever run on this core.
      That was mostly derived from the nohash usage of the same, but it is
      incorrect: ISA 3.0 clarifies tlbiel such that:

      "All TLB entries that have all of the following properties are made
      invalid on the thread executing the tlbiel instruction"

      ie. tlbiel only invalidates TLB entries on the thread executing it. So if
      the mm has ever been used on any other thread (i.e. cpu) then we must
      broadcast the invalidate.

      This bug could leave stale TLB entries in place if a program runs on
      multiple threads of a core.

      Hence use tlbiel only if we have only ever run on the current cpu. A toy
      sketch of this check follows this entry.
      
      Fixes: 1a472c9d ("powerpc/mm/radix: Add tlbflush routines")
      Cc: stable@vger.kernel.org # v4.7+
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
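      A toy sketch of the check described above, as standalone C rather than the
      kernel's actual radix flush code. The struct, the mm_is_thread_local()
      helper and the printf stand-ins for tlbiel/tlbie are illustrative
      assumptions, not kernel symbols.

      #include <stdbool.h>
      #include <stdio.h>

      struct mm_ctx {
          unsigned long cpumask; /* one bit per cpu the mm has ever run on */
          unsigned long pid;     /* hardware address-space id */
      };

      /* True only if the mm has never been active on any cpu other than "cpu". */
      static bool mm_is_thread_local(const struct mm_ctx *mm, int cpu)
      {
          return mm->cpumask == (1UL << cpu);
      }

      static void flush_tlb_mm(const struct mm_ctx *mm, int cpu)
      {
          if (mm_is_thread_local(mm, cpu))
              printf("tlbiel pid=%lu (local flush is sufficient)\n", mm->pid);
          else
              printf("tlbie  pid=%lu (broadcast: other threads may hold entries)\n", mm->pid);
      }

      int main(void)
      {
          struct mm_ctx a = { .cpumask = 1UL << 2, .pid = 7 };                /* only ever ran on cpu 2 */
          struct mm_ctx b = { .cpumask = (1UL << 2) | (1UL << 5), .pid = 9 }; /* also ran on cpu 5 */

          flush_tlb_mm(&a, 2);
          flush_tlb_mm(&b, 2);
          return 0;
      }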
  3. 01 Aug, 2016 1 commit
  4. 17 Nov, 2014 1 commit
    • mmu_gather: move minimal range calculations into generic code · fb7332a9
      Committed by Will Deacon
      On architectures with hardware broadcasting of TLB invalidation messages,
      it makes sense to reduce the range of the mmu_gather structure when
      unmapping page ranges, based on the dirty address information passed to
      tlb_remove_tlb_entry.

      arm64 already does this by directly manipulating the start/end fields
      of the gather structure, but this confuses the generic code, which
      does not expect these fields to change and can end up calculating
      invalid, negative ranges when forcing a flush in zap_pte_range.

      This patch moves the minimal range calculation out of the arm64 code
      and into the generic implementation, simplifying zap_pte_range in the
      process (which no longer needs to care about start/end, since they
      already point to the appropriate ranges). With the range being tracked
      by core code, the need_flush flag is dropped in favour of checking that
      the end of the range has actually been set. A toy model of this range
      tracking follows this entry.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Michal Simek <monstr@monstr.eu>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
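      A toy model of the generic range tracking described above, as standalone
      C. The structure and helper names loosely mirror mmu_gather,
      tlb_remove_tlb_entry and tlb_flush_mmu, but they are simplified
      assumptions, not the kernel's definitions.

      #include <stdio.h>

      #define PAGE_SIZE 4096UL

      struct mmu_gather {
          unsigned long start; /* lowest address with a removed PTE */
          unsigned long end;   /* one past the highest removed address; 0 = nothing pending */
      };

      static void tlb_gather_init(struct mmu_gather *tlb)
      {
          tlb->start = ~0UL;
          tlb->end = 0;
      }

      /* Grow the pending range as individual PTEs are unmapped. */
      static void tlb_remove_tlb_entry(struct mmu_gather *tlb, unsigned long addr)
      {
          if (addr < tlb->start)
              tlb->start = addr;
          if (addr + PAGE_SIZE > tlb->end)
              tlb->end = addr + PAGE_SIZE;
      }

      static void tlb_flush_mmu(struct mmu_gather *tlb)
      {
          /* A non-zero end replaces the old need_flush flag. */
          if (!tlb->end)
              return;
          printf("flush [%#lx, %#lx)\n", tlb->start, tlb->end);
          tlb_gather_init(tlb); /* reset for the next batch */
      }

      int main(void)
      {
          struct mmu_gather tlb;

          tlb_gather_init(&tlb);
          tlb_remove_tlb_entry(&tlb, 0x10000);
          tlb_remove_tlb_entry(&tlb, 0x14000);
          tlb_flush_mmu(&tlb); /* flushes [0x10000, 0x15000) */
          tlb_flush_mmu(&tlb); /* nothing pending, no flush */
          return 0;
      }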
  5. 25 May, 2011 2 commits
    • mm, powerpc: move the RCU page-table freeing into generic code · 26723911
      Committed by Peter Zijlstra
      In case other architectures require RCU-freed page tables to implement
      gup_fast() and software-filled hashes and similar things, provide the
      means to do so by moving the logic into generic code.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Requested-by: David Miller <davem@davemloft.net>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: mmu_gather rework · d6bf29b4
      Committed by Peter Zijlstra
      Fix up powerpc for the new mmu_gather code.

      PPC has an extra batching queue to RCU-free the actual page-table
      allocations; use the ARCH extensions for that for now.

      For the ppc64_tlb_batch, which tracks the vaddrs to unhash from the
      hardware hash table, keep using per-cpu arrays but flush on context
      switch and use a TLF bit to track the lazy_mmu state. A toy sketch of
      the context-switch flush follows this entry.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
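      A toy sketch of the per-cpu batch arrangement described above: queued
      unhash requests must be flushed when the task is switched out, with a
      per-thread flag (the TLF bit in the real code) recording whether lazy-MMU
      batching is active. All names and the single global batch here are
      illustrative simplifications, not the kernel's ppc64_tlb_batch code.

      #include <stdio.h>

      #define BATCH_MAX 3

      /* One batch per cpu in the real code; a single batch suffices for the toy. */
      static unsigned long batch_vaddrs[BATCH_MAX];
      static int batch_count;
      static int lazy_mmu_active; /* stands in for the per-thread TLF bit */

      static void flush_hash_batch(void)
      {
          if (batch_count) {
              printf("unhashing %d vaddrs\n", batch_count);
              batch_count = 0;
          }
      }

      static void queue_unhash(unsigned long vaddr)
      {
          batch_vaddrs[batch_count++] = vaddr;
          if (batch_count == BATCH_MAX)
              flush_hash_batch(); /* array full: flush now */
      }

      static void context_switch(void)
      {
          /* The outgoing task's pending batch must not survive the switch. */
          if (lazy_mmu_active)
              flush_hash_batch();
      }

      int main(void)
      {
          lazy_mmu_active = 1;
          queue_unhash(0x10000);
          queue_unhash(0x11000);
          context_switch(); /* flushes the two pending vaddrs */
          lazy_mmu_active = 0;
          return 0;
      }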
  6. 20 Aug, 2009 1 commit
    • powerpc/mm: Rework & cleanup page table freeing code path · c7cc58a1
      Committed by Benjamin Herrenschmidt
      This patch originally just added a hook to page table flushing, but
      pulling on that string brought out a whole bunch of issues, so it
      now does that and more:

       - We now make the RCU batching of page freeing SMP-only, as I
      believe it was intended initially. We make a few more things compile
      to nothing on !CONFIG_SMP.

       - Some macros are turned into functions, though that forced me to
      move a few things out of line due to unsolvable include dependencies.
      However, it's probably better that way anyway; it's not -that-
      critical a code path.

       - 32-bit didn't call pte_free_finish() from tlb_flush(), which means
      that it wouldn't push out the batch to RCU for delayed freeing when
      a bunch of page tables had been freed; they would just stay in there
      until the batch got full.

      64-bit BookE will use that hook to maintain the virtually linear
      page tables or the indirect entries in the TLB when using the
      HW loader. A toy sketch of the 32-bit pte_free_finish() fix follows
      this entry.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
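      A toy illustration of the 32-bit fix mentioned above: tlb_flush() must
      also push out a partially filled batch of freed page tables, otherwise
      they linger until the batch happens to fill. The batch array and function
      names are illustrative stand-ins, not the kernel's pte_free_finish()
      machinery.

      #include <stdio.h>

      #define BATCH_MAX 4

      static void *batch[BATCH_MAX];
      static int batch_count;

      /* Stand-in for handing a batch of page tables to RCU for delayed freeing. */
      static void submit_batch(void)
      {
          if (batch_count) {
              printf("submitting %d page tables for deferred freeing\n", batch_count);
              batch_count = 0;
          }
      }

      static void queue_pte_page_free(void *pt)
      {
          batch[batch_count++] = pt;
          if (batch_count == BATCH_MAX)
              submit_batch(); /* full batch: submit immediately */
      }

      static void tlb_flush(void)
      {
          submit_batch(); /* the fix: also push out a partial batch here */
      }

      int main(void)
      {
          int pt1, pt2;

          queue_pte_page_free(&pt1);
          queue_pte_page_free(&pt2);
          tlb_flush(); /* without this call, pt1/pt2 would linger in the batch */
          return 0;
      }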
  7. 04 Aug, 2008 1 commit
  8. 03 Oct, 2007 1 commit
    • [POWERPC] Include pagemap.h in asm/powerpc/tlb.h · d4243c17
      Committed by Mathieu Desnoyers
      Fixes this powerpc build error in 2.6.22-rc6-mm1 for powerpc 64 with
      CONFIG_SWAP=n:

      In file included from include2/asm/tlb.h:60,
                       from /home/compudj/git/linux-2.6-lttng/arch/powerpc/mm/init_64.c:56:
      /home/compudj/git/linux-2.6-lttng/include/asm-generic/tlb.h: In function 'tlb_flush_mmu':
      /home/compudj/git/linux-2.6-lttng/include/asm-generic/tlb.h:76: error: implicit declaration of function 'release_pages'
      /home/compudj/git/linux-2.6-lttng/include/asm-generic/tlb.h: In function 'tlb_remove_page':
      /home/compudj/git/linux-2.6-lttng/include/asm-generic/tlb.h:105: error: implicit declaration of function 'page_cache_release'
      make[2]: *** [arch/powerpc/mm/init_64.o] Error 1
      
      release_pages is declared in linux/pagemap.h, but pagemap.h cannot be
      included in linux/swap.h because of a sparc-related comment:
      
      /* only sparc can not include linux/pagemap.h in this file
       * so leave page_cache_release and release_pages undeclared... */
      #define free_page_and_swap_cache(page) \
              page_cache_release(page)
      #define free_pages_and_swap_cache(pages, nr) \
              release_pages((pages), (nr), 0);
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  9. 02 Jun, 2007 1 commit
    • [POWERPC] Fix possible access to free pages · 6ad8d010
      Committed by Benjamin Herrenschmidt
      I think we have a subtle race on ppc64 with the tlb batching.  The
      common code expects tlb_flush() to actually flush any pending TLB
      batch.  It does that because it delays all page freeing until after
      tlb_flush() is called, in order to ensure no stale reference to
      those pages exists in any TLB, which could otherwise allow access to
      the freed pages.

      However, our tlb_flush() only triggers the RCU freeing of page table
      pages; it does not currently trigger a flush of a pending TLB/hash
      batch, which is, I think, an error.  This fixes it. A toy model of the
      required ordering follows this entry.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
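      A toy model of the ordering requirement described above: pages may only
      be freed once every queued TLB/hash invalidation has been flushed, so
      tlb_flush() must drain the pending batch. All names are illustrative,
      not the ppc64 batch code.

      #include <stdio.h>

      static int pending_invalidations; /* entries queued but not yet flushed */

      static void queue_invalidation(void)
      {
          pending_invalidations++;
      }

      /* The fix: tlb_flush() drains the pending batch, not just the RCU
       * page-table freeing. */
      static void tlb_flush(void)
      {
          printf("flushing %d queued invalidations\n", pending_invalidations);
          pending_invalidations = 0;
      }

      static void free_unmapped_pages(void)
      {
          if (pending_invalidations)
              printf("BUG: freeing pages while %d stale translations remain\n",
                     pending_invalidations);
          else
              printf("safe to free pages\n");
      }

      int main(void)
      {
          queue_invalidation();  /* a PTE was cleared and queued for flushing */
          tlb_flush();           /* must drain the batch ... */
          free_unmapped_pages(); /* ... before the pages are released */
          return 0;
      }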
  10. 13 Apr, 2007 1 commit
    • [POWERPC] Make tlb flush batch use lazy MMU mode · a741e679
      Committed by Benjamin Herrenschmidt
      The current tlb flush code on powerpc 64 bits has a subtle race: since
      we lost the page table lock, new PTEs can be faulted in after a
      previous one has been removed but before the corresponding hash entry
      has been evicted, which can lead to all sorts of fatal problems.

      This patch reworks the batch code completely. It doesn't use the
      mmu_gather stuff anymore. Instead, we use the lazy mmu hooks that were
      added by the paravirt code. They have the nice property that the
      enter/leave lazy mmu mode pair is always fully contained by the PTE
      lock for a given range of PTEs. Thus we can guarantee that all batches
      are flushed on a given CPU before it drops that lock.

      We also generalize batching for any PTE update that requires a flush.

      Batching is now enabled on a CPU by arch_enter_lazy_mmu_mode() and
      disabled by arch_leave_lazy_mmu_mode(). The code expects that this is
      always contained within a PTE lock section, so no preemption can
      happen and no PTE insertion in that range can come from another CPU.
      When batching is enabled on a CPU, every PTE update that needs a hash
      flush will use the batch for that flush. A toy model of this batching
      discipline follows this entry.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
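      A toy model of the batching discipline described above: batching is only
      active between the enter/leave lazy-MMU calls, and that window sits
      entirely inside the PTE lock, so the batch is drained before any other
      CPU can touch PTEs in the range. The hook names mirror the kernel's
      arch_enter/leave_lazy_mmu_mode(), but the bodies here are illustrative
      stubs, not the powerpc implementation.

      #include <stdio.h>

      static int batching;
      static int batched_updates;

      static void arch_enter_lazy_mmu_mode(void)
      {
          batching = 1;
      }

      static void arch_leave_lazy_mmu_mode(void)
      {
          printf("flushing %d batched hash invalidations\n", batched_updates);
          batched_updates = 0;
          batching = 0;
      }

      /* Called for every PTE update that needs a hash flush. */
      static void hash_flush_for_pte_update(void)
      {
          if (batching)
              batched_updates++; /* deferred: flushed when we leave lazy mode */
          else
              printf("immediate flush for a single PTE update\n");
      }

      int main(void)
      {
          /* pte_lock();  -- the whole lazy window sits under the PTE lock */
          arch_enter_lazy_mmu_mode();
          hash_flush_for_pte_update();
          hash_flush_for_pte_update();
          arch_leave_lazy_mmu_mode(); /* batch drained before the lock is dropped */
          /* pte_unlock(); */
          return 0;
      }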
  11. 26 Apr, 2006 1 commit
  12. 09 Jan, 2006 1 commit
  13. 04 Nov, 2005 1 commit
  14. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!