1. 14 Dec 2015, 2 commits
  2. 12 Oct 2015, 1 commit
    • powerpc/mm: Differentiate between hugetlb and THP during page walk · 891121e6
      Authored by Aneesh Kumar K.V
      We need to properly identify whether a hugepage is an explicit or
      a transparent hugepage in follow_huge_addr(). We used to depend
      on the hugepage shift argument to do that, but in some cases that
      gives the wrong result. For example:
      
      On finding a transparent hugepage we set the hugepage shift to PMD_SHIFT.
      But we can end up clearing the THP pte via pmdp_huge_get_and_clear().
      We do prevent reuse of the pfn page via kick_all_cpus_sync(), but that
      happens after the pte has already been updated to 0. Hence in
      follow_huge_addr() we can find the hugepage shift set, while the
      transparent hugepage check fails for the THP pte.
      
      NOTE: We fixed a variant of this race against thp split in commit
      691e95fd
      ("powerpc/mm/thp: Make page table walk safe against thp split/collapse")
      
      Without this patch, we may hit the BUG_ON(flags & FOLL_GET) in
      follow_page_mask occasionally.
      
      In the long term, we may want to switch the ppc64 64K page size config
      to enable CONFIG_ARCH_WANT_GENERAL_HUGETLB.
      Reported-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
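      A minimal sketch of the distinction this change is about, in kernel-style C
      (illustrative only; walk_lookup() is a made-up helper, not the kernel's page
      walker): report THP through an explicit flag instead of letting callers infer
      it from shift == PMD_SHIFT, which races with the pmd being cleared.
      
          /*
           * Sketch, not the kernel code: return an explicit is_thp flag so the
           * caller never has to guess "THP" from the returned shift, which can
           * race with pmdp_huge_get_and_clear() zeroing the pmd.
           */
          static pte_t *walk_lookup(pmd_t *pmd, unsigned long addr,
                                    unsigned int *shift, bool *is_thp)
          {
                  *is_thp = false;
          
                  if (pmd_none(*pmd))
                          return NULL;
          
                  if (pmd_trans_huge(*pmd)) {     /* transparent hugepage */
                          *is_thp = true;
                          *shift = PMD_SHIFT;
                          return (pte_t *)pmd;
                  }
          
                  if (pmd_huge(*pmd)) {           /* explicit hugetlb mapping */
                          *shift = PMD_SHIFT;
                          return (pte_t *)pmd;
                  }
          
                  *shift = PAGE_SHIFT;            /* ordinary page */
                  return pte_offset_kernel(pmd, addr);
          }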
  3. 18 Aug 2015, 2 commits
    • powerpc/mm: Drop the 64K on 4K version of pte_pagesize_index() · 95300577
      Authored by Michael Ellerman
      Now that support for 64k pages with a 4K kernel is removed, this code is
      unreachable.
      
      CONFIG_PPC_HAS_HASH_64K can only be true when CONFIG_PPC_64K_PAGES is
      also true.
      
      But when CONFIG_PPC_64K_PAGES is true we include pte-hash64.h which
      includes pte-hash64-64k.h, which defines both pte_pagesize_index() and
      crucially __real_pte, which means this definition can never be used.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
    • powerpc/mm: Fix pte_pagesize_index() crash on 4K w/64K hash · 74b5037b
      Authored by Michael Ellerman
      The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K
      PAGE_SIZE.
      
      However when built with a 4K PAGE_SIZE there is an additional config
      option which can be enabled, PPC_HAS_HASH_64K, which means the kernel
      also knows how to hash a 64K page even though the base PAGE_SIZE is 4K.
      
      This is used in one obscure configuration, to support 64K pages for SPU
      local store on the Cell processor when the rest of the kernel is using
      4K pages.
      
      In this configuration, pte_pagesize_index() is defined to just pass
      through its arguments to get_slice_psize(). However pte_pagesize_index()
      is called for both user and kernel addresses, whereas get_slice_psize()
      only knows how to handle user addresses.
      
      This has been broken forever; however, until recently it happened to
      work. That was because in get_slice_psize() the large kernel address
      would cause the right shift of the slice mask to return zero.
      
      However in commit 7aa0727f ("powerpc/mm: Increase the slice range to
      64TB"), the get_slice_psize() code was changed so that instead of a
      right shift we do an array lookup based on the address. When passed a
      kernel address this means we index way off the end of the slice array
      and return random junk.
      
      That is only fatal if we happen to hit something non-zero, but when we
      do return a non-zero value we confuse the MMU code and eventually cause
      a checkstop.
      
      This fix is ugly, but simple. When we're called for a kernel address we
      return 4K, which is always correct in this configuration, otherwise we
      use the slice mask.
      
      Fixes: 7aa0727f ("powerpc/mm: Increase the slice range to 64TB")
      Reported-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
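      A rough sketch of that fix in C (the helper name is hypothetical; the real
      code is a macro in the powerpc pte headers): kernel addresses never consult
      the slice mask and simply report 4K.
      
          /*
           * Sketch only: slices exist for user addresses only, so a kernel
           * address always reports 4K, which is correct in this configuration.
           */
          static inline unsigned int pagesize_index_sketch(struct mm_struct *mm,
                                                           unsigned long addr)
          {
                  if (is_kernel_addr(addr))
                          return MMU_PAGE_4K;
                  return get_slice_psize(mm, addr);   /* user address: slice lookup */
          }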
  4. 25 Jun 2015, 3 commits
  5. 19 Jun 2015, 1 commit
  6. 11 May 2015, 1 commit
  7. 17 Feb 2015, 1 commit
  8. 12 Feb 2015, 1 commit
  9. 11 Dec 2014, 1 commit
  10. 14 Nov 2014, 2 commits
  11. 02 Oct 2014, 1 commit
  12. 13 Aug 2014, 1 commit
    • powerpc/thp: Handle combo pages in invalidate · fc047955
      Authored by Aneesh Kumar K.V
      If we changed the base page size of the segment, either via sub_page_protect
      or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash
      table entries. We do a lazy hash page table flush for all mapped pages
      in the demoted segment; this happens when we handle a hash page fault
      for these pages.
      
      We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a
      pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte,
      that implies that we could possibly have older 64K hash pte entries in
      the hash page table and we need to invalidate those entries.
      
      Use _PAGE_COMBO to determine the page size with which we should
      invalidate the hash table entries on unmap.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
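      The idea, as a simplified sketch (hypothetical helper; the real logic lives
      in the hash invalidate path): pick the flush size from _PAGE_COMBO rather
      than assuming the segment's current base page size.
      
          /*
           * Sketch: choose which hash entries to invalidate for a 64K-page pte.
           * _PAGE_COMBO set   -> the pte is backed by 4K hash ptes.
           * _PAGE_COMBO clear -> older 64K hash entries may still exist.
           */
          static unsigned int hash_flush_psize_sketch(unsigned long ptev)
          {
                  if (ptev & _PAGE_COMBO)
                          return MMU_PAGE_4K;
                  return MMU_PAGE_64K;
          }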
  13. 17 Feb 2014, 1 commit
  14. 29 Jan 2014, 1 commit
  15. 15 Jan 2014, 1 commit
    • powerpc/thp: Fix crash on mremap · b3084f4d
      Authored by Aneesh Kumar K.V
      This patch fixes the crash below:
      
      NIP [c00000000004cee4] .__hash_page_thp+0x2a4/0x440
      LR [c0000000000439ac] .hash_page+0x18c/0x5e0
      ...
      Call Trace:
      [c000000736103c40] [00001ffffb000000] 0x1ffffb000000(unreliable)
      [437908.479693] [c000000736103d50] [c0000000000439ac] .hash_page+0x18c/0x5e0
      [437908.479699] [c000000736103e30] [c00000000000924c] .do_hash_page+0x4c/0x58
      
      On ppc64 we use the pgtable for storing the hpte slot information and
      store the address of the pgtable at a constant offset (PTRS_PER_PMD) from
      the pmd. On mremap, when we switch the pmd, we need to withdraw and deposit
      the pgtable again, so that we find the pgtable at the PTRS_PER_PMD offset
      from the new pmd.
      
      We also want to move the withdraw and deposit before the set_pmd so
      that, when a page fault finds the pmd as trans huge, we can be sure the
      pgtable can be located at that offset.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
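      A sketch of the ordering this asks for (simplified from the generic THP
      mremap path, not the exact code): re-deposit the pgtable before set_pmd_at()
      publishes the huge pmd at its new location.
      
          /* clear the old entry first */
          pmd_t entry = pmdp_get_and_clear(mm, old_addr, old_pmd);
          
          /* move the deposited pgtable so it is found relative to the new pmd */
          pgtable_t pgtable = pgtable_trans_huge_withdraw(mm, old_pmd);
          pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
          
          /* only now make the huge pmd visible at the new address */
          set_pmd_at(mm, new_addr, new_pmd, entry);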
  16. 11 Oct 2013, 1 commit
    • powerpc: Prepare to support kernel handling of IOMMU map/unmap · 8e0861fa
      Authored by Alexey Kardashevskiy
      The current VFIO-on-POWER implementation supports only user mode
      driven mapping, i.e. QEMU is sending requests to map/unmap pages.
      However this approach is really slow, so we want to move that to KVM.
      Since H_PUT_TCE can be extremely performance sensitive (especially with
      network adapters where each packet needs to be mapped/unmapped) we chose
      to implement that as a "fast" hypercall directly in "real
      mode" (processor still in the guest context but MMU off).
      
      To be able to do that, we need to provide some facilities to
      access the struct page count within that real mode environment as things
      like the sparsemem vmemmap mappings aren't accessible.
      
      This adds an API function realmode_pfn_to_page() to get page struct when
      MMU is off.
      
      This adds to MM a new function put_page_unless_one() which drops a page
      reference if the counter is bigger than one. It is going to be used when
      the MMU is off (for example, real mode on PPC64) and we want to make sure
      that the page release will not happen in real mode, as it may crash the
      kernel in a horrible way.
      
      CONFIG_SPARSEMEM_VMEMMAP and CONFIG_FLATMEM are supported.
      
      Cc: linux-mm@kvack.org
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
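      Conceptually the new helper is just an atomic_add_unless() on the page
      count; a sketch based on the description above (field name assumed from
      kernels of that era):
      
          /*
           * Sketch: drop a reference only if the count is currently above one,
           * so the final put never happens while the MMU is off.
           */
          static inline int put_page_unless_one(struct page *page)
          {
                  return atomic_add_unless(&page->_count, -1, 1);
          }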
  17. 21 Jun 2013, 5 commits
  18. 30 Apr 2013, 1 commit
  19. 17 Sep 2012, 2 commits
  20. 29 Jun 2011, 1 commit
  21. 19 May 2011, 1 commit
  22. 27 Oct 2010, 1 commit
  23. 30 Oct 2009, 2 commits
    • powerpc/mm: Allow more flexible layouts for hugepage pagetables · a4fe3ce7
      Authored by David Gibson
      Currently each available hugepage size uses a slightly different
      pagetable layout: that is, the bottom level table of pointers to
      hugepages is a different size, and may branch off from the normal page
      tables at a different level.  Every hugepage aware path that needs to
      walk the pagetables must therefore look up the hugepage size from the
      slice info first, and work out the correct way to walk the pagetables
      accordingly.  Future hardware is likely to add more possible hugepage
      sizes, more layout options and more mess.
      
      This patch therefore reworks the handling of hugepage pagetables to
      reduce this complexity.  In the new scheme, instead of having to
      consult the slice mask, pagetable walking code can check a flag in the
      PGD/PUD/PMD entries to see where to branch off to hugepage pagetables,
      and the entry also contains the information (essentially the hugepage
      shift) necessary to then interpret that table without recourse to the
      slice mask.  This scheme can be extended neatly to handle multiple
      levels of self-describing "special" hugepage pagetables, although for
      now we assume only one level exists.
      
      This approach means that only the pagetable allocation path needs to
      know how the pagetables should be set out.  All other (hugepage)
      pagetable walking paths can just interpret the structure as they go.
      
      There already was a flag bit in PGD/PUD/PMD entries for hugepage
      directory pointers, but it was only used for debug.  We alter that
      flag bit to instead be a 0 in the MSB to indicate a hugepage pagetable
      pointer (normally it would be 1 since the pointer lies in the linear
      mapping).  This means that asm pagetable walking can test for (and
      punt on) hugepage pointers with the same test that checks for
      unpopulated page directory entries (beq becomes bge), since hugepage
      pointers will always be positive, and normal pointers always negative.
      
      While we're at it, we get rid of the confusing (and grep defeating)
      #defining of hugepte_shift to be the same thing as mmu_huge_psizes.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
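      A simplified illustration of that encoding (hypothetical helper name, not
      the kernel's own hugepd test): a hugepage-directory pointer is stored
      "positive" (MSB clear), while normal pointers into the linear mapping are
      "negative", so one signed comparison covers both the empty and the
      hugepage cases.
      
          /*
           * Sketch: normal page-table pointers live in the kernel linear map
           * and are negative as signed 64-bit values; hugepage directory
           * pointers are stored with the MSB clear, i.e. positive.
           */
          static inline bool entry_is_hugepd_sketch(unsigned long pde)
          {
                  return pde != 0 && (long)pde > 0;
          }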
    • powerpc/mm: Cleanup management of kmem_caches for pagetables · a0668cdc
      Authored by David Gibson
      Currently we have a fair bit of rather fiddly code to manage the
      various kmem_caches used to store page tables of various levels.  We
      generally have two caches holding some combination of PGD, PUD and PMD
      tables, plus several more for the special hugepage pagetables.
      
      This patch cleans this all up by taking a different approach.  Rather
      than the caches being designated as for PUDs or for hugeptes for 16M
      pages, the caches are simply allocated to be a specific size.  Thus
      sharing of caches between different types/levels of pagetables happens
      naturally.  The pagetable size, where needed, is passed around encoded
      in the same way as {PGD,PUD,PMD}_INDEX_SIZE; that is n where the
      pagetable contains 2^n pointers.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
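      With that encoding, an allocation only needs the index size; a sketch under
      the assumption of a simple cache array (pgtable_cache[] here is illustrative,
      not the exact kernel data structure):
      
          /*
           * Sketch: caches are keyed purely by table size, expressed as n where
           * the table holds 2^n pointers (the {PGD,PUD,PMD}_INDEX_SIZE encoding).
           */
          static void *alloc_pgtable_sketch(struct kmem_cache **pgtable_cache,
                                            int index_size)
          {
                  return kmem_cache_zalloc(pgtable_cache[index_size], GFP_KERNEL);
          }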
  24. 27 Aug 2009, 1 commit
    • powerpc/mm: Cleanup handling of execute permission · ea3cc330
      Authored by Benjamin Herrenschmidt
      This is an attempt at cleaning up a bit the way we handle execute
      permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only
      defined by CPUs that can do something with it, and the myriad of
      #ifdef's in the I$/D$ coherency code is reduced to 2 cases that
      hopefully should cover everything.
      
      The logic on BookE is a little bit different from what it was, though
      not by much. Since _PAGE_EXEC will now be set by the generic code for
      executable pages, we need to filter it out when the page is not yet
      clean and recover it later. However, I don't expect the code to be more
      bloated than it already was in that area due to that change.
      
      I could boast that this brings proper enforcing of per-page execute
      permissions to all BookE and 40x but in fact, we've had that now for
      some time as a side effect of my previous rework in that area (and
      I didn't even know it :-) We would only enable execute permission if
      the page was cache clean and we would only cache clean it if we took
      an exec fault. Since we now enforce that the latter only works if
      VM_EXEC is part of the VMA flags, we de facto already enforce per-page
      execute permissions... Unless I missed something
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
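      A rough sketch of the filtering idea (hypothetical function; the real code
      is the pte-set filter in the powerpc pgtable code): strip _PAGE_EXEC until
      the page is known to be I-cache clean.
      
          /*
           * Sketch: only honour _PAGE_EXEC once the page has been cleaned for
           * execution; PG_arch_1 is used here as the "icache clean" marker.
           */
          static pte_t exec_filter_sketch(pte_t pte, struct page *pg)
          {
                  if (pg && !test_bit(PG_arch_1, &pg->flags))
                          pte = __pte(pte_val(pte) & ~_PAGE_EXEC);
                  return pte;
          }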
  25. 20 Aug 2009, 2 commits
  26. 09 Jun 2009, 1 commit
  27. 24 Mar 2009, 2 commits