    thp: khugepaged · ba76149f
    Committed by Andrea Arcangeli
    Add khugepaged to relocate fragmented pages into hugepages if new
    hugepages become available.  (This is independent of the defrag logic
    that will have to make new hugepages available.)
    
    The fundamental reason why khugepaged is unavoidable is that some memory
    can be fragmented and not everything can be relocated.  So when a virtual
    machine quits and releases gigabytes of hugepages, we want to use those
    freely available hugepages to create huge pmds in the other virtual
    machines that may be running on fragmented memory, to maximize CPU
    efficiency at all times.  The scan is slow and takes nearly zero CPU
    time, except when it copies data (in which case we definitely want to
    pay for that CPU time), so it seems a good tradeoff.
    
    In addition to the hugepages being released by other processes releasing
    memory, we have the strong suspicion that the performance impact of
    potentially defragmenting hugepages during or before each page fault
    could lead to more performance inconsistency than allocating small pages
    at first and having them collapsed into large pages later...  if they
    prove themselves to be long-lived mappings (the khugepaged scan is slow,
    so short-lived mappings have a low probability of running into
    khugepaged compared to long-lived mappings).
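    
    As a hedged illustration (not part of this commit): a userspace program
    can mark a long-lived anonymous mapping with madvise(MADV_HUGEPAGE) so
    that khugepaged will consider collapsing its small pages into a hugepage
    once one becomes available.  The 2 MiB region size below is an
    assumption for x86-64; this is a minimal sketch, not the kernel-side
    implementation.
    
        /*
         * Minimal sketch: create an anonymous mapping, hint that it is
         * hugepage-worthy, and touch every small page.  khugepaged may
         * later collapse the range into one hugepage.  Requires a kernel
         * with transparent hugepage support.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/mman.h>
        
        #define REGION_SIZE (2UL * 1024 * 1024)  /* one x86-64 hugepage (assumption) */
        
        int main(void)
        {
                void *buf;
        
                /* Anonymous private memory is what khugepaged scans. */
                buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED) {
                        perror("mmap");
                        return EXIT_FAILURE;
                }
        
                /* Hint that this mapping is long-lived and hugepage-worthy. */
                if (madvise(buf, REGION_SIZE, MADV_HUGEPAGE))
                        perror("madvise(MADV_HUGEPAGE)");
        
                /* Fault in every small page; collapse can happen later. */
                memset(buf, 0, REGION_SIZE);
        
                /* ... long-lived use of buf ... */
                munmap(buf, REGION_SIZE);
                return EXIT_SUCCESS;
        }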
    Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
    Acked-by: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>