1. 14 January 2011, 40 commits
    • A
      thp: khugepaged: make khugepaged aware about madvise · 60ab3244
      Authored by Andrea Arcangeli
      MADV_HUGEPAGE and MADV_NOHUGEPAGE were fully effective only if run after
      mmap and before touching the memory.  While this is enough for most
      usages, it takes little effort to make madvise more dynamic at runtime on
      an existing mapping by making khugepaged aware of madvise.
      
      MADV_HUGEPAGE: register in khugepaged immediately without waiting for a
      page fault (which may never happen if all pages are already mapped and
      the "enabled" knob was set to madvise during the initial page faults).
      
      MADV_NOHUGEPAGE: skip vmas marked VM_NOHUGEPAGE in khugepaged to stop
      collapsing pages where not needed.
      
      [akpm@linux-foundation.org: tweak comment]
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60ab3244
    • A
      thp: madvise(MADV_NOHUGEPAGE) · a664b2d8
      Authored by Andrea Arcangeli
      Add madvise MADV_NOHUGEPAGE to mark regions that are not important to be
      hugepage backed.  Return -EINVAL if the vma is not of an anonymous type,
      or the feature isn't built into the kernel.  Never silently return
      success.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a664b2d8
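      The two madvise hints discussed in the entries above are plain userspace
      calls.  A minimal sketch of how a program might opt an anonymous region in
      and then out of transparent hugepages (the region size and the fallback
      constants for old headers are illustrative, not taken from the patches):

      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      #ifndef MADV_HUGEPAGE		/* illustrative fallback for old headers */
      #define MADV_HUGEPAGE	14
      #define MADV_NOHUGEPAGE	15
      #endif

      int main(void)
      {
      	size_t len = 64UL * 1024 * 1024;	/* illustrative 64M region */
      	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
      		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      	if (p == MAP_FAILED) {
      		perror("mmap");
      		return 1;
      	}
      	/* Ask for hugepage backing; EINVAL means a non-anonymous vma or a
      	 * kernel built without transparent hugepage support. */
      	if (madvise(p, len, MADV_HUGEPAGE))
      		perror("madvise(MADV_HUGEPAGE)");
      	/* ... later, opt the same region out again. */
      	if (madvise(p, len, MADV_NOHUGEPAGE))
      		perror("madvise(MADV_NOHUGEPAGE)");
      	munmap(p, len);
      	return 0;
      }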
    • A
      thp: mm: define MADV_NOHUGEPAGE · 1ddd6db4
      Authored by Andrea Arcangeli
      Define MADV_NOHUGEPAGE.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1ddd6db4
    • A
      thp: compound_trans_order · 37c2ac78
      Authored by Andrea Arcangeli
      Read compound_trans_order safely.  No-op for CONFIG_TRANSPARENT_HUGEPAGE=n.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      37c2ac78
    • R
      thp: fix anon memory statistics with transparent hugepages · 2c888cfb
      Authored by Rik van Riel
      Count each transparent hugepage as HPAGE_PMD_NR pages in the LRU
      statistics, so the Active(anon) and Inactive(anon) statistics in
      /proc/meminfo are correct.
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2c888cfb
    • A
      thp: use compaction in kswapd for GFP_ATOMIC order > 0 · 5a03b051
      Authored by Andrea Arcangeli
      This takes advantage of memory compaction to properly generate pages of
      order > 0 when regular page reclaim fails, the priority level becomes more
      severe, and we still don't reach the proper watermarks.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5a03b051
    • A
      thp: mmu_notifier_test_young · 8ee53820
      Authored by Andrea Arcangeli
      For GRU and EPT, we need gup-fast to set the referenced bit too (this is
      why it's correct to return 0 when shadow_access_mask is zero: it requires
      gup-fast to set the referenced bit).  A qemu-kvm access already sets the
      young bit in the pte if it isn't zero-copy; if it's zero-copy or a shadow
      paging EPT minor fault, we rely on gup-fast to signal that the page is in
      use...
      
      We also need to check the young bits on the secondary pagetables for NPT
      and not nested shadow mmu as the data may never get accessed again by the
      primary pte.
      
      Without this closer accuracy, we'd have to remove the heuristic that
      avoids collapsing hugepages in hugepage virtual regions that have not even
      a single subpage in use.
      
      ->test_young is fully backwards compatible with GRU and other usages that
      don't have young bits in pagetables set by the hardware and that should
      nuke the secondary mmu mappings when ->clear_flush_young runs just like
      EPT does.
      
      Removing the heuristic that checks the young bit in
      khugepaged/collapse_huge_page completely isn't so bad either probably but
      I thought it was worth it and this makes it reliable.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8ee53820
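      For reference, a sketch of the shape of the notifier hook this adds; the
      member order and the surrounding fields are from memory, not copied from
      the patch:

      /* Sketch only: the two young-bit hooks on mmu_notifier_ops as described
       * above; other members are elided and the layout is not taken from the
       * patch. */
      struct mmu_notifier_ops_sketch {
      	/* Clears the accessed bit in the secondary MMU (and flushes); used
      	 * by reclaim to age pages the guest is touching through EPT/NPT. */
      	int (*clear_flush_young)(struct mmu_notifier *mn,
      				 struct mm_struct *mm,
      				 unsigned long address);
      	/* New here: report whether the secondary MMU has the young bit set,
      	 * without clearing it, so khugepaged can keep its "at least one
      	 * subpage referenced" heuristic accurate. */
      	int (*test_young)(struct mmu_notifier *mn,
      			  struct mm_struct *mm,
      			  unsigned long address);
      };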
    • A
      thp: avoid breaking huge pmd invariants in case of vma_adjust failures · 94fcc585
      Authored by Andrea Arcangeli
      A huge pmd can only be mapped if the corresponding 2M virtual range is
      fully contained in the vma.  At times the VM calls split_vma twice; if the
      first split_vma succeeds and the second fails, the first split_vma remains
      in effect and is not rolled back.  For split_vma or vma_adjust to fail
      an allocation failure is needed so it's a very unlikely event (the out of
      memory killer would normally fire before any allocation failure is visible
      to kernel and userland and if an out of memory condition happens it's
      unlikely to happen exactly here).  Nevertheless it's safer to ensure that
      no huge pmd can be left around if the vma is adjusted in a way that can't
      fit hugepages anymore at the new vm_start/vm_end address.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      94fcc585
    • A
      thp: add numa awareness to hugepage allocations · 0bbbc0b3
      Authored by Andrea Arcangeli
      It's mostly a matter of replacing alloc_pages with alloc_pages_vma after
      introducing alloc_pages_vma.  khugepaged needs special handling as the
      allocation has to happen inside collapse_huge_page where the vma is known
      and an error has to be returned to the outer loop to sleep
      alloc_sleep_millisecs in case of failure.  But it retains the more
      efficient logic of handling allocation failures in khugepaged in case of
      CONFIG_NUMA=n.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0bbbc0b3
    • J
      thp: mprotect: transparent huge page support · cd7548ab
      Authored by Johannes Weiner
      Natively handle huge pmds when changing page tables on behalf of
      mprotect().
      
      I left out update_mmu_cache() because we do not need it on x86 anyway but
      more importantly the interface works on ptes, not pmds.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cd7548ab
    • J
      thp: mincore transparent hugepage support · 0ca1634d
      Authored by Johannes Weiner
      Handle transparent huge page pmd entries natively instead of splitting
      them into subpages.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ca1634d
    • J
      thp: add x86 32bit support · f2d6bfe9
      Authored by Johannes Weiner
      Add support for transparent hugepages to x86 32bit.
      
      Share the same VM_ bitflag for VM_MAPPED_COPY.  mm/nommu.c will never
      support transparent hugepages.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2d6bfe9
    • A
      thp: remove PG_buddy · 5f24ce5f
      Authored by Andrea Arcangeli
      PG_buddy can be converted to _mapcount == -2.  So the PG_compound_lock can
      be added to page->flags without overflowing (because of the sparse section
      bits increasing) with CONFIG_X86_PAE=y and CONFIG_X86_PAT=y.  This also
      has to move the memory hotplug code from _mapcount to lru.next to avoid
      any risk of clashes.  We can't use lru.next for PG_buddy removal, but
      memory hotplug can use lru.next even more easily than the mapcount
      instead.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f24ce5f
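      Roughly, the encoding described above replaces the old page flag with a
      reserved _mapcount value; a sketch based on that description (the helper
      name is suffixed to mark it as illustrative):

      /* Sketch: PG_buddy replaced by the reserved _mapcount value -2 named
       * above; the real helpers pair this test with set/clear functions that
       * write and restore the marker. */
      #define PAGE_BUDDY_MAPCOUNT_VALUE	(-2)

      static inline int PageBuddy_sketch(struct page *page)
      {
      	return atomic_read(&page->_mapcount) == PAGE_BUDDY_MAPCOUNT_VALUE;
      }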
    • A
      thp: khugepaged · ba76149f
      Authored by Andrea Arcangeli
      Add khugepaged to relocate fragmented pages into hugepages if new
      hugepages become available.  (This is independent of the defrag logic
      that will have to make new hugepages available.)
      
      The fundamental reason why khugepaged is unavoidable, is that some memory
      can be fragmented and not everything can be relocated.  So when a virtual
      machine quits and releases gigabytes of hugepages, we want to use those
      freely available hugepages to create huge-pmd in the other virtual
      machines that may be running on fragmented memory, to maximize the CPU
      efficiency at all times.  The scan is slow, it takes nearly zero cpu time,
      except when it copies data (in which case it means we definitely want to
      pay for that cpu time) so it seems a good tradeoff.
      
      In addition to the hugepages being released by other process releasing
      memory, we have the strong suspicion that the performance impact of
      potentially defragmenting hugepages during or before each page fault could
      lead to more performance inconsistency than allocating small pages at
      first and having them collapsed into large pages later...  if they prove
      themselves to be long-lived mappings (the khugepaged scan is slow, so
      short-lived mappings have a low probability of running into khugepaged
      compared to long-lived mappings).
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ba76149f
    • A
      thp: transparent hugepage vmstat · 79134171
      Authored by Andrea Arcangeli
      Add hugepage stat information to /proc/vmstat and /proc/meminfo.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      79134171
    • A
      thp: pmd_trans_huge migrate bugcheck · 500d65d4
      Authored by Andrea Arcangeli
      No pmd_trans_huge should ever materialize in migration ptes areas, because
      we split the hugepage before migration ptes are instantiated.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      500d65d4
    • A
      thp: madvise(MADV_HUGEPAGE) · 0af4e98b
      Authored by Andrea Arcangeli
      Add madvise MADV_HUGEPAGE to mark regions that are important to be
      hugepage backed.  Return -EINVAL if the vma is not of an anonymous type,
      or the feature isn't built into the kernel.  Never silently return
      success.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0af4e98b
    • A
      thp: transparent hugepage core · 71e3aac0
      Authored by Andrea Arcangeli
      Lately I've been working to make KVM use hugepages transparently without
      the usual restrictions of hugetlbfs.  Some of the restrictions I'd like to
      see removed:
      
      1) hugepages have to be swappable or the guest physical memory remains
         locked in RAM and can't be paged out to swap
      
      2) if a hugepage allocation fails, regular pages should be allocated
         instead and mixed in the same vma without any failure and without
         userland noticing
      
      3) if some task quits and more hugepages become available in the
         buddy, guest physical memory backed by regular pages should be
         relocated on hugepages automatically in regions under
         madvise(MADV_HUGEPAGE) (ideally event driven by waking up the
         kernel daemon if the order=HPAGE_PMD_SHIFT-PAGE_SHIFT list becomes
         not null)
      
      4) avoidance of reservation and maximization of use of hugepages whenever
         possible. Reservation (needed to avoid runtime fatal failures) may be ok for
         1 machine with 1 database with 1 database cache with 1 database cache size
         known at boot time. It's definitely not feasible with a virtualization
         hypervisor usage like RHEV-H that runs an unknown number of virtual machines
         with an unknown size of each virtual machine with an unknown amount of
         pagecache that could be potentially useful in the host for guests not using
         O_DIRECT (aka cache=off).
      
      hugepages in the virtualization hypervisor (and also in the guest!) are
      much more important than in a regular host not using virtualization,
      because with NPT/EPT they decrease the tlb-miss cacheline accesses from 24
      to 19 in case only the hypervisor uses transparent hugepages, and they
      decrease the tlb-miss cacheline accesses from 19 to 15 in case both the
      linux hypervisor and the linux guest use this patch (though the
      guest will limit the additional speedup to anonymous regions only for
      now...).  Even more important is that the tlb miss handler is much slower
      on a NPT/EPT guest than for a regular shadow paging or no-virtualization
      scenario.  So maximizing the amount of virtual memory cached by the TLB
      pays off significantly more with NPT/EPT than without (even if there would
      be no significant speedup in the tlb-miss runtime).
      
      The first (and more tedious) part of this work requires allowing the VM to
      handle anonymous hugepages mixed with regular pages transparently on
      regular anonymous vmas.  This is what this patch tries to achieve in the
      least intrusive possible way.  We want hugepages and hugetlb to be used in
      a way so that all applications can benefit without changes (as usual we
      leverage the KVM virtualization design: by improving the Linux VM at
      large, KVM gets the performance boost too).
      
      The most important design choice is: always fallback to 4k allocation if
      the hugepage allocation fails!  This is the _very_ opposite of some large
      pagecache patches that failed with -EIO back then if a 64k (or similar)
      allocation failed...
      
      Second important decision (to reduce the impact of the feature on the
      existing pagetable handling code) is that at any time we can split an
      hugepage into 512 regular pages and it has to be done with an operation
      that can't fail.  This way the reliability of the swapping isn't decreased
      (no need to allocate memory when we are short on memory to swap) and it's
      trivial to plug a split_huge_page* one-liner where needed without
      polluting the VM.  Over time we can teach mprotect, mremap and friends to
      handle pmd_trans_huge natively without calling split_huge_page*.  The fact
      it can't fail isn't just for swap: if split_huge_page would return -ENOMEM
      (instead of the current void) we'd need to rollback the mprotect from the
      middle of it (ideally including undoing the split_vma) which would be a
      big change and in the very wrong direction (it'd likely be simpler not to
      call split_huge_page at all and to teach mprotect and friends to handle
      hugepages instead of rolling them back from the middle).  In short the
      very value of split_huge_page is that it can't fail.
      
      The collapsing and madvise(MADV_HUGEPAGE) part will remain separated and
      incremental and it'll just be an "harmless" addition later if this initial
      part is agreed upon.  It also should be noted that locking-wise replacing
      regular pages with hugepages is going to be very easy if compared to what
      I'm doing below in split_huge_page, as it will only happen when
      page_count(page) matches page_mapcount(page) if we can take the PG_lock
      and mmap_sem in write mode.  collapse_huge_page will be a "best effort"
      that (unlike split_huge_page) can fail at the minimal sign of trouble and
      we can try again later.  collapse_huge_page will be similar to how KSM
      works and the madvise(MADV_HUGEPAGE) will work similar to
      madvise(MADV_MERGEABLE).
      
      The default I like is that transparent hugepages are used at page fault
      time.  This can be changed with
      /sys/kernel/mm/transparent_hugepage/enabled.  The control knob can be set
      to three values "always", "madvise", "never" which mean respectively that
      hugepages are always used, or only inside madvise(MADV_HUGEPAGE) regions,
      or never used.  /sys/kernel/mm/transparent_hugepage/defrag instead
      controls if the hugepage allocation should defrag memory aggressively
      "always", only inside "madvise" regions, or "never".
      
      The pmd_trans_splitting/pmd_trans_huge locking is very solid.  The
      put_page (from get_user_page users that can't use mmu notifier like
      O_DIRECT) that runs against a __split_huge_page_refcount instead was a
      pain to serialize in a way that would result always in a coherent page
      count for both tail and head.  I think my locking solution with a
      compound_lock taken only after the page_first is valid and is still a
      PageHead should be safe but it surely needs review from SMP race point of
      view.  In short there is no current existing way to serialize the O_DIRECT
      final put_page against split_huge_page_refcount so I had to invent a new
      one (O_DIRECT loses knowledge on the mapping status by the time gup_fast
      returns so...).  And I didn't want to impact all gup/gup_fast users for
      now, maybe if we change the gup interface substantially we can avoid this
      locking, I admit I didn't think too much about it because changing the gup
      unpinning interface would be invasive.
      
      If we ignored O_DIRECT we could stick to the existing compound refcounting
      code, by simply adding a get_user_pages_fast_flags(foll_flags) where KVM
      (and any other mmu notifier user) would call it without FOLL_GET (and if
      FOLL_GET isn't set we'd just BUG_ON if nobody registered itself in the
      current task mmu notifier list yet).  But O_DIRECT is fundamental for
      decent performance of virtualized I/O on fast storage so we can't avoid it
      to solve the race of put_page against split_huge_page_refcount to achieve
      a complete hugepage feature for KVM.
      
      Swap and oom works fine (well just like with regular pages ;).  MMU
      notifier is handled transparently too, with the exception of the young bit
      on the pmd, that didn't have a range check but I think KVM will be fine
      because the whole point of hugepages is that EPT/NPT will also use a huge
      pmd when they notice gup returns pages with PageCompound set, so they
      won't care of a range and there's just the pmd young bit to check in that
      case.
      
      NOTE: in some cases if the L2 cache is small, this may slow down and waste
      memory during COWs because 4M of memory are accessed in a single fault
      instead of 8k (the payoff is that after COW the program can run faster).
      So we might want to switch the copy_huge_page (and clear_huge_page too) to
      non-temporal stores.  I also extensively researched ways to avoid this
      cache thrashing with a full prefault logic that would cow in 8k/16k/32k/64k
      up to 1M (I can send those patches that fully implemented prefault) but I
      concluded they're not worth it and they add a huge amount of additional
      complexity and they remove all tlb benefits until the full hugepage has
      been faulted in, to save a little bit of memory and some cache during app
      startup, but they still don't improve substantially the cache-thrashing
      during startup if the prefault happens in >4k chunks.  One reason is that
      those 4k pte entries copied are still mapped on a perfectly cache-colored
      hugepage, so the thrashing is the worst one can generate in those copies
      (cow of 4k page copies aren't so well colored so they thrash less, but
      again this results in software running faster after the page fault).
      Those prefault patches allowed things like a pte where post-cow pages were
      local 4k regular anon pages and the not-yet-cowed pte entries were
      pointing in the middle of some hugepage mapped read-only.  If it doesn't
      pay off substantially with today's hardware it will pay off even less in
      the future with larger l2 caches, and the prefault logic would bloat the
      VM a lot.  If one is embedded, transparent_hugepage can be disabled during
      boot with sysfs or
      with the boot commandline parameter transparent_hugepage=0 (or
      transparent_hugepage=2 to restrict hugepages inside madvise regions) that
      will ensure not a single hugepage is allocated at boot time.  It is simple
      enough to just disable transparent hugepage globally and let transparent
      hugepages be allocated selectively by applications in the MADV_HUGEPAGE
      region (both at page fault time, and if enabled with the
      collapse_huge_page too through the kernel daemon).
      
      This patch supports only hugepages mapped in the pmd, archs that have
      smaller hugepages will not fit in this patch alone.  Also some archs like
      power have certain tlb limits that prevents mixing different page size in
      the same regions so they will not fit in this framework that requires
      "graceful fallback" to basic PAGE_SIZE in case of physical memory
      fragmentation.  hugetlbfs remains a perfect fit for those because its
      software limits happen to match the hardware limits.  hugetlbfs also
      remains a perfect fit for hugepage sizes like 1GByte that cannot be hoped
      to be found not fragmented after a certain system uptime and that would be
      very expensive to defragment with relocation, so requiring reservation.
      hugetlbfs is the "reservation way", the point of transparent hugepages is
      not to have any reservation at all and maximizing the use of cache and
      hugepages at all times automatically.
      
      Some performance results:
      
      vmx andrea # LD_PRELOAD=/usr/lib64/libhugetlbfs.so HUGETLB_MORECORE=yes HUGETLB_PATH=/mnt/huge/ ./largepages3
      memset page fault 1566023
      memset tlb miss 453854
      memset second tlb miss 453321
      random access tlb miss 41635
      random access second tlb miss 41658
      vmx andrea # LD_PRELOAD=/usr/lib64/libhugetlbfs.so HUGETLB_MORECORE=yes HUGETLB_PATH=/mnt/huge/ ./largepages3
      memset page fault 1566471
      memset tlb miss 453375
      memset second tlb miss 453320
      random access tlb miss 41636
      random access second tlb miss 41637
      vmx andrea # ./largepages3
      memset page fault 1566642
      memset tlb miss 453417
      memset second tlb miss 453313
      random access tlb miss 41630
      random access second tlb miss 41647
      vmx andrea # ./largepages3
      memset page fault 1566872
      memset tlb miss 453418
      memset second tlb miss 453315
      random access tlb miss 41618
      random access second tlb miss 41659
      vmx andrea # echo 0 > /proc/sys/vm/transparent_hugepage
      vmx andrea # ./largepages3
      memset page fault 2182476
      memset tlb miss 460305
      memset second tlb miss 460179
      random access tlb miss 44483
      random access second tlb miss 44186
      vmx andrea # ./largepages3
      memset page fault 2182791
      memset tlb miss 460742
      memset second tlb miss 459962
      random access tlb miss 43981
      random access second tlb miss 43988
      
      ============
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/time.h>
      
      #define SIZE (3UL*1024*1024*1024)
      
      int main()
      {
      	char *p = malloc(SIZE), *p2;
      	struct timeval before, after;
      
      	gettimeofday(&before, NULL);
      	memset(p, 0, SIZE);
      	gettimeofday(&after, NULL);
      	printf("memset page fault %Lu\n",
      	       (after.tv_sec-before.tv_sec)*1000000UL +
      	       after.tv_usec-before.tv_usec);
      
      	gettimeofday(&before, NULL);
      	memset(p, 0, SIZE);
      	gettimeofday(&after, NULL);
      	printf("memset tlb miss %Lu\n",
      	       (after.tv_sec-before.tv_sec)*1000000UL +
      	       after.tv_usec-before.tv_usec);
      
      	gettimeofday(&before, NULL);
      	memset(p, 0, SIZE);
      	gettimeofday(&after, NULL);
      	printf("memset second tlb miss %Lu\n",
      	       (after.tv_sec-before.tv_sec)*1000000UL +
      	       after.tv_usec-before.tv_usec);
      
      	gettimeofday(&before, NULL);
      	for (p2 = p; p2 < p+SIZE; p2 += 4096)
      		*p2 = 0;
      	gettimeofday(&after, NULL);
      	printf("random access tlb miss %Lu\n",
      	       (after.tv_sec-before.tv_sec)*1000000UL +
      	       after.tv_usec-before.tv_usec);
      
      	gettimeofday(&before, NULL);
      	for (p2 = p; p2 < p+SIZE; p2 += 4096)
      		*p2 = 0;
      	gettimeofday(&after, NULL);
      	printf("random access second tlb miss %Lu\n",
      	       (after.tv_sec-before.tv_sec)*1000000UL +
      	       after.tv_usec-before.tv_usec);
      
      	return 0;
      }
      ============
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      71e3aac0
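      The /sys/kernel/mm/transparent_hugepage/enabled knob described above is a
      plain text file; a small userspace sketch of reading the current policy
      and switching it to "madvise" (minimal error handling, root assumed for
      the write):

      #include <stdio.h>

      #define THP_ENABLED	"/sys/kernel/mm/transparent_hugepage/enabled"

      int main(void)
      {
      	char buf[128] = "";
      	FILE *f = fopen(THP_ENABLED, "r");

      	if (!f) {
      		perror(THP_ENABLED);
      		return 1;
      	}
      	if (fgets(buf, sizeof(buf), f))
      		printf("current policy: %s", buf);	/* e.g. "[always] madvise never" */
      	fclose(f);

      	/* Restrict hugepages to madvise(MADV_HUGEPAGE) regions; needs root. */
      	f = fopen(THP_ENABLED, "w");
      	if (f) {
      		fputs("madvise", f);
      		fclose(f);
      	}
      	return 0;
      }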
    • A
      thp: _GFP_NO_KSWAPD · 32dba98e
      Authored by Andrea Arcangeli
      Transparent hugepage allocations must be allowed not to invoke kswapd or
      any other kind of indirect reclaim (especially when the defrag sysfs
      control is disabled).  It's unacceptable to swap out anonymous pages
      (potentially anonymous transparent hugepages) in order to create new
      transparent hugepages.  This is true for the MADV_HUGEPAGE areas too
      (swapping out a kvm virtual machine and so having it suffer an unbearable
      slowdown, so another one with guest physical memory marked MADV_HUGEPAGE
      can run 30% faster if it is running memory intensive workloads, makes no
      sense).  If a transparent hugepage allocation fails the slowdown is minor
      and there is total fallback, so kswapd should never be asked to swapout
      memory to allow the high order allocation to succeed.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32dba98e
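      A hedged sketch of the kind of gfp mask a transparent hugepage allocation
      site might build so that a failed allocation falls back to 4k pages
      instead of waking kswapd; the helper name and the defrag toggle are
      illustrative, not taken from the patch:

      /* Sketch only: combine the usual movable highuser flags for an anonymous
       * compound page with __GFP_NO_KSWAPD, so a failed hugepage allocation
       * falls back to 4k pages instead of kicking kswapd into reclaim. */
      static inline gfp_t thp_gfpmask_sketch(int defrag)
      {
      	return GFP_HIGHUSER_MOVABLE | __GFP_COMP | __GFP_NO_KSWAPD |
      	       (defrag ? __GFP_REPEAT | __GFP_NOWARN : 0);
      }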
    • A
      thp: kvm mmu transparent hugepage support · 936a5fe6
      Authored by Andrea Arcangeli
      This should work for both hugetlbfs and transparent hugepages.
      
      [akpm@linux-foundation.org: bring forward PageTransCompound() addition for bisectability]
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      936a5fe6
    • A
      thp: clear_copy_huge_page · 47ad8475
      Authored by Andrea Arcangeli
      Move the copy/clear_huge_page functions to common code to share between
      hugetlb.c and huge_memory.c.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      47ad8475
    • A
      thp: add pmd_huge_pte to mm_struct · e7a00c45
      Authored by Andrea Arcangeli
      This increases the size of the mm struct a bit, but it is needed to
      preallocate one pte for each hugepage so that split_huge_page will not
      require a fail path.  Guarantee of success is a fundamental property of
      split_huge_page, to avoid decreasing swapping reliability and to avoid
      adding -ENOMEM fail paths that would otherwise force the hugepage-unaware
      VM code to learn rolling back in the middle of its pte mangling operations
      (if anything, we need it to learn to handle pmd_trans_huge natively rather
      than to be capable of rollback).  When split_huge_page runs, a pte is
      needed for the split to succeed, to map the newly split regular pages with
      a regular pte.  This way all existing VM code remains backwards compatible
      by just adding a split_huge_page* one-liner.  The memory waste of those
      preallocated ptes is negligible and so it is worth it.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7a00c45
    • A
      thp: clear page compound · 4e6af67e
      Authored by Andrea Arcangeli
      split_huge_page must transform a compound page to a regular page and needs
      ClearPageCompound.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e6af67e
    • A
      thp: add pmd mmu_notifier helpers · 91a4ee26
      Authored by Andrea Arcangeli
      Add mmu notifier helpers to handle pmd huge operations.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      91a4ee26
    • A
      thp: pte alloc trans splitting · 8ac1f832
      Authored by Andrea Arcangeli
      pte alloc routines must wait for split_huge_page if the pmd is not present
      and not null (i.e.  pmd_trans_splitting).  The additional branches are
      optimized away at compile time by pmd_trans_splitting if the config option
      is off.  However we must pass the vma down in order to know the anon_vma
      lock to wait for.
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8ac1f832
    • A
      thp: add pmd mangling generic functions · e2cda322
      Authored by Andrea Arcangeli
      Some are needed to build but not actually used on archs not supporting
      transparent hugepages.  Others like pmdp_clear_flush are used by x86 too.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e2cda322
    • A
      thp: special pmd_trans_* functions · 5f6e8da7
      Authored by Andrea Arcangeli
      These return 0 at compile time when the config option is disabled, to
      allow gcc to eliminate the transparent hugepage function calls at compile
      time without additional #ifdefs (only the export of those functions has
      to be visible to gcc, but they won't be required at link time and
      huge_memory.o need not be built at all).
      
      _PAGE_BIT_UNUSED1 is never used for pmd, only on pte.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5f6e8da7
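      The compile-time trick described above is the usual config-gated stub
      pattern; a sketch showing only the disabled half (the enabled half is
      provided by the arch pmd code):

      /* Sketch: with the config option off these evaluate to constant 0, so gcc
       * drops every transparent hugepage call path and huge_memory.o is never
       * needed at link time. */
      #ifdef CONFIG_TRANSPARENT_HUGEPAGE
      /* the arch provides the real pmd_trans_huge()/pmd_trans_splitting() */
      #else
      #define pmd_trans_huge(pmd)		0
      #define pmd_trans_splitting(pmd)	0
      #endif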
    • A
      thp: export maybe_mkwrite · 14fd403f
      Authored by Andrea Arcangeli
      huge_memory.c needs it too when it falls back to copying hugepages into
      regular fragmented pages if hugepage allocation fails during COW.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14fd403f
    • A
      thp: alter compound get_page/put_page · 91807063
      Authored by Andrea Arcangeli
      Alter compound get_page/put_page to keep references on subpages too, in
      order to allow __split_huge_page_refcount to split an hugepage even while
      subpages have been pinned by one of the get_user_pages() variants.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      91807063
    • A
      thp: compound_lock · e9da73d6
      Authored by Andrea Arcangeli
      Add a new compound_lock() needed to serialize put_page against
      __split_huge_page_refcount().
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9da73d6
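      In shape, the new lock is a bit spinlock on the PG_compound_lock page flag
      mentioned earlier in this series; a sketch based on that description
      (helper names suffixed to mark them as illustrative):

      /* Sketch: serialize put_page() against __split_huge_page_refcount() by
       * spinning on a dedicated page flag bit. */
      static inline void compound_lock_sketch(struct page *page)
      {
      #ifdef CONFIG_TRANSPARENT_HUGEPAGE
      	bit_spin_lock(PG_compound_lock, &page->flags);
      #endif
      }

      static inline void compound_unlock_sketch(struct page *page)
      {
      #ifdef CONFIG_TRANSPARENT_HUGEPAGE
      	bit_spin_unlock(PG_compound_lock, &page->flags);
      #endif
      }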
    • A
      thp: mm: define MADV_HUGEPAGE · a826e422
      Authored by Andrea Arcangeli
      Define MADV_HUGEPAGE.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a826e422
    • M
      mm: kswapd: stop high-order balancing when any suitable zone is balanced · 99504748
      Authored by Mel Gorman
      Simon Kirby reported the following problem
      
         We're seeing cases on a number of servers where cache never fully
         grows to use all available memory.  Sometimes we see servers with 4 GB
         of memory that never seem to have less than 1.5 GB free, even with a
         constantly-active VM.  In some cases, these servers also swap out while
         this happens, even though they are constantly reading the working set
         into memory.  We have been seeing this happening for a long time; I
         don't think it's anything recent, and it still happens on 2.6.36.
      
      After some debugging work by Simon, Dave Hansen and others, the prevailing
      theory became that kswapd was reclaiming order-3 pages requested by SLUB
      and was being too aggressive about it.
      
      There are two apparent problems here.  On the target machine, there is a
      small Normal zone in comparison to DMA32.  As kswapd tries to balance all
      zones, it would continually try reclaiming for Normal even though DMA32
      was balanced enough for callers.  The second problem is that
      sleeping_prematurely() does not use the same logic as balance_pgdat() when
      deciding whether to sleep or not.  This keeps kswapd artificially awake.
      
      A number of tests were run and the figures from previous postings will
      look very different for a few reasons.  One, the old figures were forcing
      my network card to use GFP_ATOMIC in an attempt to replicate Simon's problem.
      Second, I previously specified slub_min_order=3 again in an attempt to
      reproduce Simon's problem.  In this posting, I'm depending on Simon to say
      whether his problem is fixed or not and these figures are to show the
      impact to the ordinary cases.  Finally, the "vmscan" figures are taken
      from /proc/vmstat instead of the tracepoints.  There is less information
      but recording is less disruptive.
      
      The first test of relevance was postmark with a process running in the
      background reading a large amount of anonymous memory in blocks.  The
      objective was to vaguely simulate what was happening on Simon's machine
      and it's memory intensive enough to have kswapd awake.
      
      POSTMARK
                                                  traceonly          kanyzone
      Transactions per second:              156.00 ( 0.00%)   153.00 (-1.96%)
      Data megabytes read per second:        21.51 ( 0.00%)    21.52 ( 0.05%)
      Data megabytes written per second:     29.28 ( 0.00%)    29.11 (-0.58%)
      Files created alone per second:       250.00 ( 0.00%)   416.00 (39.90%)
      Files create/transact per second:      79.00 ( 0.00%)    76.00 (-3.95%)
      Files deleted alone per second:       520.00 ( 0.00%)   420.00 (-23.81%)
      Files delete/transact per second:      79.00 ( 0.00%)    76.00 (-3.95%)
      
      MMTests Statistics: duration
      User/Sys Time Running Test (seconds)         16.58      17.4
      Total Elapsed Time (seconds)                218.48    222.47
      
      VMstat Reclaim Statistics: vmscan
      Direct reclaims                                  0          4
      Direct reclaim pages scanned                     0        203
      Direct reclaim pages reclaimed                   0        184
      Kswapd pages scanned                        326631     322018
      Kswapd pages reclaimed                      312632     309784
      Kswapd low wmark quickly                         1          4
      Kswapd high wmark quickly                      122        475
      Kswapd skip congestion_wait                      1          0
      Pages activated                             700040     705317
      Pages deactivated                           212113     203922
      Pages written                                 9875       6363
      
      Total pages scanned                         326631    322221
      Total pages reclaimed                       312632    309968
      %age total pages scanned/reclaimed          95.71%    96.20%
      %age total pages scanned/written             3.02%     1.97%
      
      proc vmstat: Faults
      Major Faults                                   300       254
      Minor Faults                                645183    660284
      Page ins                                    493588    486704
      Page outs                                  4960088   4986704
      Swap ins                                      1230       661
      Swap outs                                     9869      6355
      
      Performance is mildly affected because kswapd is no longer doing as much
      work and the background memory consumer process is getting in the way.
      Note that kswapd scanned and reclaimed fewer pages as it's less aggressive
      and overall fewer pages were scanned and reclaimed.  Swap in/out is
      particularly reduced again reflecting kswapd throwing out fewer pages.
      
      The slight performance impact is unfortunate here but it looks like a
      direct result of kswapd being less aggressive.  As the bug report is about
      too many pages being freed by kswapd, it may have to be accepted for now.
      
      The second test is a streaming IO benchmark that was previously used by
      Johannes to show regressions in page reclaim.
      
      MICRO
      					 traceonly  kanyzone
      User/Sys Time Running Test (seconds)         29.29     28.87
      Total Elapsed Time (seconds)                492.18    488.79
      
      VMstat Reclaim Statistics: vmscan
      Direct reclaims                               2128       1460
      Direct reclaim pages scanned               2284822    1496067
      Direct reclaim pages reclaimed              148919     110937
      Kswapd pages scanned                      15450014   16202876
      Kswapd pages reclaimed                     8503697    8537897
      Kswapd low wmark quickly                      3100       3397
      Kswapd high wmark quickly                     1860       7243
      Kswapd skip congestion_wait                    708        801
      Pages activated                               9635       9573
      Pages deactivated                             1432       1271
      Pages written                                  223       1130
      
      Total pages scanned                       17734836  17698943
      Total pages reclaimed                      8652616   8648834
      %age total pages scanned/reclaimed          48.79%    48.87%
      %age total pages scanned/written             0.00%     0.01%
      
      proc vmstat: Faults
      Major Faults                                   165       221
      Minor Faults                               9655785   9656506
      Page ins                                      3880      7228
      Page outs                                 37692940  37480076
      Swap ins                                         0        69
      Swap outs                                       19        15
      
      Again fewer pages are scanned and reclaimed as expected and this time the
      test completed faster.  Note that kswapd is hitting its watermarks faster
      (low and high wmark quickly) which I expect is due to kswapd reclaiming
      fewer pages.
      
      I also ran fs-mark, iozone and sysbench but there is nothing interesting
      to report in the figures.  Performance is not significantly changed and
      the reclaim statistics look reasonable.
      
      This patch:
      
      When the allocator enters its slow path, kswapd is woken up to balance the
      node.  It continues working until all zones within the node are balanced.
      For order-0 allocations, this makes perfect sense but for higher orders it
      can have unintended side-effects.  If the zone sizes are imbalanced,
      kswapd may reclaim heavily within a smaller zone discarding an excessive
      number of pages.  The user-visible behaviour is that kswapd is awake and
      reclaiming even though plenty of pages are free from a suitable zone.
      
      This patch alters the "balance" logic for high-order reclaim allowing
      kswapd to stop if any suitable zone becomes balanced to reduce the number
      of pages it reclaims from other zones.  kswapd still tries to ensure that
      order-0 watermarks for all zones are met before sleeping.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Eric B Munson <emunson@mgebm.net>
      Cc: Simon Kirby <sim@hostway.ca>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99504748
    • S
      mm: remove unlikely() from page_mapping() · e20e8779
      Authored by Steven Rostedt
      page_mapping() has an unlikely() that the mapping has PAGE_MAPPING_ANON set.
      But running the annotated branch profiler on a normal desktop system doing
      various tasks (xchat, evolution, firefox, distcc), it is not really that
      unlikely that the mapping here will have the PAGE_MAPPING_ANON flag set:
      
       correct incorrect  %        Function                  File              Line
       ------- ---------  -        --------                  ----              ----
      35935762 1270265395  97 page_mapping                   mm.h                 659
      1306198001   143659   0 page_mapping                   mm.h                 657
      203131478   121586   0 page_mapping                   mm.h                 657
       5415491     1116   0 page_mapping                   mm.h                 657
      74899487     1116   0 page_mapping                   mm.h                 657
      203132845      224   0 page_mapping                   mm.h                 659
       5415464       27   0 page_mapping                   mm.h                 659
         13552        0   0 page_mapping                   mm.h                 657
         13552        0   0 page_mapping                   mm.h                 659
        242630        0   0 page_mapping                   mm.h                 657
        242630        0   0 page_mapping                   mm.h                 659
      74899487        0   0 page_mapping                   mm.h                 659
      
      The page_mapping() is a static inline, which is why it shows up multiple
      times.
      
      The unlikely in page_mapping() was correct a total of 1909540379 times and
      incorrect 1270533123 times, with a 39% being incorrect.  With this much of
      an error, it's best to simply remove the unlikely and have the compiler
      and branch prediction figure this out.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e20e8779
    • S
      mm: remove likely() from mapping_unevictable() · 088e5465
      Authored by Steven Rostedt
      The mapping_unevictable() has a likely() around the mapping parameter.
      This mapping parameter comes from page_mapping() which has an unlikely()
      that the page will be set as PAGE_MAPPING_ANON, and if so, it will return
      NULL.  One would think that this unlikely() means that the mapping
      returned by page_mapping() would not be NULL, but where page_mapping() is
      used just above mapping_unevictable(), that unlikely() is incorrect most
      of the time.  This means that the "likely(mapping)" in
      mapping_unevictable() is incorrect most of the time.
      
      Running the annotated branch profiler on my main box which runs firefox,
      evolution, xchat and is part of my distcc farm, I had this:
      
       correct incorrect  %        Function                  File              Line
       ------- ---------  -        --------                  ----              ----
      12872836 1269443893  98 mapping_unevictable            pagemap.h            51
      35935762 1270265395  97 page_mapping                   mm.h                 659
      1306198001   143659   0 page_mapping                   mm.h                 657
      203131478   121586   0 page_mapping                   mm.h                 657
       5415491     1116   0 page_mapping                   mm.h                 657
      74899487     1116   0 page_mapping                   mm.h                 657
      203132845      224   0 page_mapping                   mm.h                 659
       5415464       27   0 page_mapping                   mm.h                 659
         13552        0   0 page_mapping                   mm.h                 657
         13552        0   0 page_mapping                   mm.h                 659
        242630        0   0 page_mapping                   mm.h                 657
        242630        0   0 page_mapping                   mm.h                 659
      74899487        0   0 page_mapping                   mm.h                 659
      
      The page_mapping() is a static inline, which is why it shows up multiple
      times.  The mapping_unevictable() is also a static inline but seems to be
      used only once in my setup.
      
      The unlikely in page_mapping() was correct a total of 1909540379 times and
      incorrect 1270533123 times, with a 39% being incorrect.  Perhaps this is
      enough to remove the unlikely from page_mapping() as well.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Nick Piggin <npiggin@kernel.dk>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      088e5465
    • M
      mm: add FOLL_MLOCK follow_page flag. · 110d74a9
      Authored by Michel Lespinasse
      Move the code to mlock pages from __mlock_vma_pages_range() to
      follow_page().
      
      This allows __mlock_vma_pages_range() to not have to break down work into
      16-page batches.
      
      An additional motivation for doing this within the present patch series is
      that it'll make it easier for a later change to drop mmap_sem when
      blocking on disk (we'd like to be able to resume at the page that was read
      from disk instead of at the start of a 16-page batch).
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      110d74a9
    • R
      mm: clear PageError bit in msync & fsync · 212260aa
      Authored by Rik van Riel
      Temporary IO failures, eg.  due to loss of both multipath paths, can
      permanently leave the PageError bit set on a page, resulting in msync or
      fsync returning -EIO over and over again, even if IO is now getting to the
      disk correctly.
      
      We already clear the AS_ENOSPC and AS_IO bits in mapping->flags in the
      filemap_fdatawait_range function.  Also clearing the PageError bit on the
      page allows subsequent msync or fsync calls on this file to return without
      an error, if the subsequent IO succeeds.
      
      Unfortunately data written out in the msync or fsync call that returned
      -EIO can still get lost, because the page dirty bit appears to not get
      restored on IO error.  However, the alternative could be potentially all
      of memory filling up with uncleanable dirty pages, hanging the system, so
      there is no nice choice here...
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Acked-by: Valerie Aurora <vaurora@redhat.com>
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      212260aa
    • M
      oom: allow a non-CAP_SYS_RESOURCE proces to oom_score_adj down · dabb16f6
      Authored by Mandeep Singh Baines
      We'd like to be able to oom_score_adj a process up/down as it
      enters/leaves the foreground.  Currently, it is not possible to oom_adj
      down without CAP_SYS_RESOURCE.  This patch allows a task to decrease its
      oom_score_adj back to the value that a CAP_SYS_RESOURCE thread set it to
      or its inherited value at fork.  Assuming the thread that has forked it
      has oom_score_adj of 0, each process could decrease it back from 0 upon
      activation unless a CAP_SYS_RESOURCE thread elevated it to something
      higher.
      
      Alternatives considered:
      
      * a setuid binary
      * a daemon with CAP_SYS_RESOURCE
      
      Since you don't want all processes to be able to reduce their oom_adj, a
      setuid or daemon implementation would be complex.  The alternatives also
      have much higher overhead.
      
      This patch updated from original patch based on feedback from David
      Rientjes.
      Signed-off-by: Mandeep Singh Baines <msb@chromium.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Ying Han <yinghan@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dabb16f6
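      From userspace the knob is the per-process oom_score_adj file in /proc; a
      minimal sketch of the foreground/background switching described above (the
      chosen values, and the assumption that the inherited value was 0, are
      illustrative):

      #include <stdio.h>

      /* Sketch: raise our own oom_score_adj while backgrounded, then lower it
       * back to the (assumed) inherited value of 0 on activation -- the case
       * this patch makes possible without CAP_SYS_RESOURCE. */
      static int set_oom_score_adj(int value)
      {
      	FILE *f = fopen("/proc/self/oom_score_adj", "w");

      	if (!f)
      		return -1;
      	fprintf(f, "%d", value);
      	return fclose(f);
      }

      int main(void)
      {
      	set_oom_score_adj(300);	/* backgrounded: a more likely OOM victim */
      	set_oom_score_adj(0);	/* foreground: back to the inherited value */
      	return 0;
      }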
    • D
      mm: unify module_alloc code for vmalloc · d0a21265
      Authored by David Rientjes
      Four architectures (arm, mips, sparc, x86) use __vmalloc_area() for
      module_init().  Much of the code is duplicated and can be generalized in a
      globally accessible function, __vmalloc_node_range().
      
      __vmalloc_node() now calls into __vmalloc_node_range() with a range of
      [VMALLOC_START, VMALLOC_END) for functionally equivalent behavior.
      
      Each architecture may then use __vmalloc_node_range() directly to remove
      the duplication of code.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0a21265
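      A hedged sketch of what an arch module_alloc() might look like on top of
      the new helper; the parameter order here is from memory and should be
      checked against the real __vmalloc_node_range() prototype:

      /* Sketch only: allocate module text in [MODULES_VADDR, MODULES_END) with
       * executable kernel protections via the new ranged helper; -1 means "no
       * preferred NUMA node" and the caller address is recorded for reporting. */
      void *module_alloc_sketch(unsigned long size)
      {
      	if (size == 0)
      		return NULL;
      	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
      				    GFP_KERNEL | __GFP_HIGHMEM,
      				    PAGE_KERNEL_EXEC, -1,
      				    __builtin_return_address(0));
      }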
    • D
      mm: remove gfp mask from pcpu_get_vm_areas · ec3f64fc
      Authored by David Rientjes
      pcpu_get_vm_areas() only uses GFP_KERNEL allocations, so remove the gfp_t
      formal and use the mask internally.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ec3f64fc
    • D
      mm: remove unused get_vm_area_node · e5a5623b
      Authored by David Rientjes
      get_vm_area_node() is unused in the kernel and can thus be removed.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e5a5623b