1. 09 May 2017, 1 commit
    • powerpc/mm/book3s/64: Rework page table geometry for lower memory usage · ba95b5d0
      Committed by Michael Ellerman
      Recently in commit f6eedbba ("powerpc/mm/hash: Increase VA range to 128TB")
      we increased the virtual address space for user processes to 128TB by default,
      and up to 512TB if user space opts in.
      
      This obviously required expanding the range of the Linux page tables. For Book3s
      64-bit using hash and with PAGE_SIZE=64K, we increased the PGD to 2^15 entries.
      This meant we could cover the full address range, while still being able to
      insert a 16G hugepage at the PGD level and a 16M hugepage in the PMD.
      
      The downside of that geometry is that it uses a lot of memory for the PGD, and
      in particular makes the PGD a 4-page allocation, which means it's much more
      likely to fail under memory pressure.
      
      Instead we can make the PMD larger, so that a single PUD entry maps 16G,
      allowing the 16G hugepages to sit at that level in the tree. We're then able to
      split the remaining bits between the PUD and PGD. We make the PGD slightly
      larger as that results in lower memory usage for typical programs.
      
      When THP is enabled the PMD actually doubles in size, to 2^11 entries, or 2^14
      bytes, which is large but still < PAGE_SIZE.
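      
      To make the arithmetic concrete, the sketch below recomputes each level's
      coverage from the shifts. The per-level index sizes are assumptions chosen
      to match the figures in this message (16M at the PMD, 16G at the PUD,
      49 bits of address space in total); they are not the authoritative kernel
      definitions.
      
          #include <stdio.h>
          
          #define PAGE_SHIFT     16   /* 64K pages */
          #define PTE_INDEX_SIZE  8   /* assumption */
          #define PMD_INDEX_SIZE 10   /* doubles to 2^11 entries with THP, per the text */
          #define PUD_INDEX_SIZE  7   /* assumption */
          #define PGD_INDEX_SIZE  8   /* assumption */
          
          int main(void)
          {
                  unsigned int pmd_map = PAGE_SHIFT + PTE_INDEX_SIZE;   /* 24 -> 16M */
                  unsigned int pud_map = pmd_map + PMD_INDEX_SIZE;      /* 34 -> 16G */
                  unsigned int va_bits = pud_map + PUD_INDEX_SIZE + PGD_INDEX_SIZE;
          
                  printf("PMD entry maps 2^%u bytes (16M hugepage level)\n", pmd_map);
                  printf("PUD entry maps 2^%u bytes (16G hugepage level)\n", pud_map);
                  printf("total user VA: 2^%u bytes\n", va_bits);       /* 49 -> 512TB */
                  return 0;
          }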
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
  2. 31 March 2017, 3 commits
  3. 18 November 2016, 1 commit
  4. 11 May 2016, 2 commits
  5. 01 May 2016, 7 commits
  6. 03 March 2016, 1 commit
  7. 29 February 2016, 2 commits
  8. 27 February 2016, 1 commit
    • powerpc/mm/book3s-64: Free up 7 high-order bits in the Linux PTE · f1a9ae03
      Committed by Paul Mackerras
      This frees up bits 57-63 in the Linux PTE on 64-bit Book 3S machines.
      In the 4k page case, this is done just by reducing the size of the
      RPN field to 39 bits, giving 51-bit real addresses.  In the 64k page
      case, we had 10 unused bits in the middle of the PTE, so this moves
      the RPN field down 10 bits to make use of those unused bits.  This
      means the RPN field is now 3 bits larger at 37 bits, giving 53-bit
      real addresses in the normal case, or 49-bit real addresses for the
      special 4k PFN case.
      
      We are doing this in order to be able to move some other PTE bits
      into the positions where PowerISA V3.0 processors will expect to
      find them in radix-tree mode.  Ultimately we will be able to move
      the RPN field to lower bit positions and make it larger.
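      
      As a quick check of the arithmetic: a real address is just the RPN shifted
      up by the page size, so the real-address width is the RPN width plus the
      page shift. A minimal sketch, with the field widths taken from the text
      above:
      
          #include <stdio.h>
          
          /* Real-address width = RPN width + page shift. */
          static unsigned int real_addr_bits(unsigned int rpn_bits,
                                             unsigned int page_shift)
          {
                  return rpn_bits + page_shift;
          }
          
          int main(void)
          {
                  printf("4K pages:          %u bits\n", real_addr_bits(39, 12)); /* 51 */
                  printf("64K pages:         %u bits\n", real_addr_bits(37, 16)); /* 53 */
                  printf("4K PFN in 64K PTE: %u bits\n", real_addr_bits(37, 12)); /* 49 */
                  return 0;
          }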
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  9. 16 January 2016, 1 commit
  10. 17 December 2015, 1 commit
    • powerpc/mm: Add page soft dirty tracking · 7207f436
      Committed by Laurent Dufour
      The user space checkpoint and restart tool (CRIU) needs page changes to
      be soft-dirty tracked. This allows doing a pre-checkpoint and then
      dumping only the touched pages.
      
      This is done by using a newly assigned PTE bit (_PAGE_SOFT_DIRTY) when
      the page is backed in memory, and a new _PAGE_SWP_SOFT_DIRTY bit when
      the page is swapped out.
      
      To introduce a new PTE _PAGE_SOFT_DIRTY bit value common to the hash 4k
      and hash 64k PTEs, the bits already defined in hash-*4k.h are shifted
      left by one.
      
      The _PAGE_SWP_SOFT_DIRTY bit is placed dynamically after the swap type in
      the swap PTE. A check is added to ensure that the bit is not
      overwritten by _PAGE_HPTEFLAGS.
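      
      From user space, CRIU-style tools consume this state through the generic
      soft-dirty interface: bit 55 of each /proc/<pid>/pagemap entry reports the
      soft-dirty flag, and writing "4" to /proc/<pid>/clear_refs resets it. A
      minimal sketch (most error handling omitted):
      
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>
          
          #define SOFT_DIRTY (1ULL << 55)  /* soft-dirty flag in a pagemap entry */
          
          /* 1 if the page containing vaddr is soft-dirty, 0 if clean, -1 on error. */
          static int page_soft_dirty(int pagemap_fd, unsigned long vaddr)
          {
                  uint64_t entry;
                  long psize = sysconf(_SC_PAGESIZE);
                  off_t off = (off_t)(vaddr / psize) * sizeof(entry);
          
                  if (pread(pagemap_fd, &entry, sizeof(entry), off) != sizeof(entry))
                          return -1;
                  return !!(entry & SOFT_DIRTY);
          }
          
          int main(void)
          {
                  static char page[1];
                  int clear_fd = open("/proc/self/clear_refs", O_WRONLY);
                  int map_fd = open("/proc/self/pagemap", O_RDONLY);
          
                  if (clear_fd < 0 || map_fd < 0)
                          return 1;
                  write(clear_fd, "4", 1);  /* reset soft-dirty for this process */
                  page[0] = 1;              /* touch one page */
                  printf("soft-dirty: %d\n",
                         page_soft_dirty(map_fd, (unsigned long)page));
                  return 0;
          }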
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  11. 14 December 2015, 10 commits
  12. 13 August 2014, 1 commit
  13. 05 August 2014, 1 commit
  14. 14 May 2013, 1 commit
  15. 19 November 2012, 1 commit
  16. 17 September 2012, 1 commit
  17. 10 May 2011, 1 commit
  18. 08 December 2009, 1 commit
  19. 02 December 2009, 1 commit
  20. 27 November 2009, 1 commit
  21. 26 June 2009, 1 commit
    • powerpc/mm: Fix potential access to freed pages when using hugetlbfs · 6c16a74d
      Committed by Benjamin Herrenschmidt
      When using 64k page sizes, our PTE pages are split into two halves,
      the second half containing the "extension" used to keep track of
      individual 4k pages when not using HW 64k pages.
      
      However, our page tables used for hugetlb have a slightly different
      format and don't carry that "second half".
      
      Our code that batches PTEs to be invalidated unconditionally reads
      the "second half" (to put it into the batch), which means that when
      called to invalidate hugetlb PTEs, it will access unrelated memory.
      
      It breaks when CONFIG_DEBUG_PAGEALLOC is enabled.
      
      This fixes it by only accessing the second half when the _PAGE_COMBO
      bit is set in the first half, which indicates that we are dealing with
      a "combo" page which represents 16x4k subpages. Anything else shouldn't
      have this bit set and thus not require loading from the second half.
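      
      A minimal sketch of that logic; the names and constants below are
      illustrative assumptions, not the exact kernel code:
      
          /* A 64K PTE page carries a second "extension" half with per-4K-subpage
           * slot information, but only for "combo" pages. */
          #define _PAGE_COMBO   0x10000000UL  /* hypothetical bit value */
          #define PTRS_PER_PTE  (1UL << 8)    /* entries in the first half */
          
          static unsigned long pte_second_half(unsigned long *ptep, unsigned long pte)
          {
                  if (!(pte & _PAGE_COMBO))
                          return 0;  /* e.g. hugetlb: no extension half to read */
          
                  /* The extension word sits PTRS_PER_PTE entries past the PTE. */
                  return *(ptep + PTRS_PER_PTE);
          }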
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>