1. 09 Feb, 2017 (1 commit)
  2. 04 Feb, 2017 (6 commits)
    • mm, fs: check for fatal signals in do_generic_file_read() · 5abf186a
      Committed by Michal Hocko
      do_generic_file_read() can be told to perform a large request from
      userspace.  If the system is under OOM and the reading task is the OOM
      victim, then it has access to memory reserves, and finishing the full
      request can lead to complete memory depletion, which is dangerous.
      Make sure we go with a short read instead and allow the killed task to
      terminate, as sketched below.
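
      A minimal sketch of such a check, assuming it sits in the per-page
      copy loop of do_generic_file_read() (exact placement in the real
      patch may differ):

          /* Bail out once a fatal signal is pending, so an OOM victim
           * returns a short read (or -EINTR if nothing was copied yet)
           * instead of draining memory reserves on a huge request. */
          if (fatal_signal_pending(current)) {
                  error = -EINTR;
                  goto out;   /* 'out' returns written ? written : error */
          }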
      
      Link: http://lkml.kernel.org/r/20170201092706.9966-3-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5abf186a
    • base/memory, hotplug: fix a kernel oops in show_valid_zones() · a96dfddb
      Committed by Toshi Kani
      Reading a sysfs "memoryN/valid_zones" file leads to the following oops
      when the first page of a range is not backed by struct page.
      show_valid_zones() assumes that 'start_pfn' is always valid for
      page_zone().
      
       BUG: unable to handle kernel paging request at ffffea017a000000
       IP: show_valid_zones+0x6f/0x160
      
      This issue may happen on x86-64 systems with 64GiB or more memory,
      since their memory block size is bumped up to 2GiB.  [1] An example of
      such a system is described below.  0x3240000000 is aligned only to
      1GiB, so its memory block starts from 0x3200000000, which is not
      backed by struct page.
      
       BIOS-e820: [mem 0x0000003240000000-0x000000603fffffff] usable
      
      Since test_pages_in_a_zone() already checks holes, fix this issue by
      extending this function to return 'valid_start' and 'valid_end' for a
      given range.  show_valid_zones() then proceeds with the valid range.
      
      [1] Commit bdee237c ("x86: mm: Use 2GB memory block size on
          large-memory x86-64 systems")
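
      A sketch of the extended interface and its use, following the names
      in the changelog (not necessarily the literal patch):

          /* Return 1 iff [start_pfn, end_pfn) lies in a single zone, and
           * hand back the first and last+1 pfns of the range that are
           * actually backed by struct page. */
          int test_pages_in_a_zone(unsigned long start_pfn,
                                   unsigned long end_pfn,
                                   unsigned long *valid_start,
                                   unsigned long *valid_end);

          /* show_valid_zones() then derives the zone from a pfn known to
           * be valid, instead of blindly trusting start_pfn: */
          if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages,
                                    &valid_start, &valid_end))
                  return sprintf(buf, "none\n");  /* hypothetical fallback */
          zone = page_zone(pfn_to_page(valid_start));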
      
      Link: http://lkml.kernel.org/r/20170127222149.30893-3-toshi.kani@hpe.com
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Zhang Zhen <zhenzhang.zhang@huawei.com>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: <stable@vger.kernel.org>	[4.4+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a96dfddb
    • mm/memory_hotplug.c: check start_pfn in test_pages_in_a_zone() · deb88a2a
      Committed by Toshi Kani
      Patch series "fix a kernel oops when reading sysfs valid_zones", v2.
      
      A sysfs memory file is created for each 2GiB memory block on x86-64
      when the system has 64GiB or more memory.  [1] When the start address
      of a memory block is not backed by struct page, i.e. the memory range
      is not aligned to 2GiB, reading its 'valid_zones' attribute file leads
      to a kernel oops.  This issue was observed on multiple x86-64 systems
      with more than 64GiB of memory.  This patch-set fixes this issue.
      
      Patch 1 first fixes an issue in test_pages_in_a_zone(), which does not
      test the start section.
      
      Patch 2 then fixes the kernel oops by extending test_pages_in_a_zone()
      to return valid [start, end).
      
      Note for stable kernels: The memory block size change was made by commit
      bdee237c ("x86: mm: Use 2GB memory block size on large-memory x86-64
      systems"), which was accepted to 3.9.  However, this patch-set depends
      on (and fixes) the change to test_pages_in_a_zone() made by commit
      5f0f2887 ("mm/memory_hotplug.c: check for missing sections in
      test_pages_in_a_zone()"), which was accepted to 4.4.
      
      So, I recommend that we backport it up to 4.4.
      
      [1] Commit bdee237c ("x86: mm: Use 2GB memory block size on
          large-memory x86-64 systems")
      
      This patch (of 2):
      
      test_pages_in_a_zone() does not check 'start_pfn' when it is
      section-aligned, because 'sec_end_pfn' is initialized equal to 'pfn',
      which makes the walk skip the start section entirely.  Since this
      function is called to test the range of a sysfs memory file,
      'start_pfn' is always section-aligned, so the start section is never
      checked.

      Fix it by properly setting 'sec_end_pfn' to the next section pfn, as
      sketched below.
      
      Also make sure that this function returns 1 only when the range belongs
      to a zone.
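
      A sketch of the loop fix, assuming SECTION_ALIGN_UP() is what rounds
      'sec_end_pfn' up to the next section boundary:

          /* Before: sec_end_pfn started equal to pfn, so a section-aligned
           * start_pfn made the inner walk run zero times for the start
           * section.  After: round up past start_pfn so it is tested too. */
          for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn + 1);
               pfn < end_pfn;
               pfn = sec_end_pfn, sec_end_pfn += PAGES_PER_SECTION) {
                  /* ... walk the pfns of this section ... */
          }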
      
      Link: http://lkml.kernel.org/r/20170127222149.30893-2-toshi.kani@hpe.com
      Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
      Cc: Andrew Banman <abanman@sgi.com>
      Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: <stable@vger.kernel.org>	[4.4+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      deb88a2a
    • shmem: fix sleeping from atomic context · 253fd0f0
      Committed by Kirill A. Shutemov
      Syzkaller fuzzer managed to trigger this:
      
          BUG: sleeping function called from invalid context at mm/shmem.c:852
          in_atomic(): 1, irqs_disabled(): 0, pid: 529, name: khugepaged
          3 locks held by khugepaged/529:
           #0:  (shrinker_rwsem){++++..}, at: [<ffffffff818d7ef1>] shrink_slab.part.59+0x121/0xd30 mm/vmscan.c:451
           #1:  (&type->s_umount_key#29){++++..}, at: [<ffffffff81a63630>] trylock_super+0x20/0x100 fs/super.c:392
           #2:  (&(&sbinfo->shrinklist_lock)->rlock){+.+.-.}, at: [<ffffffff818fd83e>] spin_lock include/linux/spinlock.h:302 [inline]
           #2:  (&(&sbinfo->shrinklist_lock)->rlock){+.+.-.}, at: [<ffffffff818fd83e>] shmem_unused_huge_shrink+0x28e/0x1490 mm/shmem.c:427
          CPU: 2 PID: 529 Comm: khugepaged Not tainted 4.10.0-rc5+ #201
          Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
          Call Trace:
             shmem_undo_range+0xb20/0x2710 mm/shmem.c:852
             shmem_truncate_range+0x27/0xa0 mm/shmem.c:939
             shmem_evict_inode+0x35f/0xca0 mm/shmem.c:1030
             evict+0x46e/0x980 fs/inode.c:553
             iput_final fs/inode.c:1515 [inline]
             iput+0x589/0xb20 fs/inode.c:1542
             shmem_unused_huge_shrink+0xbad/0x1490 mm/shmem.c:446
             shmem_unused_huge_scan+0x10c/0x170 mm/shmem.c:512
             super_cache_scan+0x376/0x450 fs/super.c:106
             do_shrink_slab mm/vmscan.c:378 [inline]
             shrink_slab.part.59+0x543/0xd30 mm/vmscan.c:481
             shrink_slab mm/vmscan.c:2592 [inline]
             shrink_node+0x2c7/0x870 mm/vmscan.c:2592
             shrink_zones mm/vmscan.c:2734 [inline]
             do_try_to_free_pages+0x369/0xc80 mm/vmscan.c:2776
             try_to_free_pages+0x3c6/0x900 mm/vmscan.c:2982
             __perform_reclaim mm/page_alloc.c:3301 [inline]
             __alloc_pages_direct_reclaim mm/page_alloc.c:3322 [inline]
             __alloc_pages_slowpath+0xa24/0x1c30 mm/page_alloc.c:3683
             __alloc_pages_nodemask+0x544/0xae0 mm/page_alloc.c:3848
             __alloc_pages include/linux/gfp.h:426 [inline]
             __alloc_pages_node include/linux/gfp.h:439 [inline]
             khugepaged_alloc_page+0xc2/0x1b0 mm/khugepaged.c:750
             collapse_huge_page+0x182/0x1fe0 mm/khugepaged.c:955
             khugepaged_scan_pmd+0xfdf/0x12a0 mm/khugepaged.c:1208
             khugepaged_scan_mm_slot mm/khugepaged.c:1727 [inline]
             khugepaged_do_scan mm/khugepaged.c:1808 [inline]
             khugepaged+0xe9b/0x1590 mm/khugepaged.c:1853
             kthread+0x326/0x3f0 kernel/kthread.c:227
             ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
      
      The iput() from atomic context was a bad idea: if, after igrab(),
      somebody else calls iput() and we are left holding the last inode
      reference, our iput() leads to inode eviction and therefore to
      sleeping.
      
      This patch should fix the situation.
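
      A minimal sketch of the usual cure for this class of bug (not the
      literal patch): unhook entries and take references under the
      spinlock, but defer every iput() until after the lock is dropped.

          LIST_HEAD(to_put);

          spin_lock(&sbinfo->shrinklist_lock);
          list_for_each_entry_safe(info, next, &sbinfo->shrinklist,
                                   shrinklist) {
                  if (!igrab(&info->vfs_inode)) {
                          /* inode is mid-eviction; just unhook it */
                          list_del_init(&info->shrinklist);
                          continue;
                  }
                  list_move(&info->shrinklist, &to_put);
          }
          spin_unlock(&sbinfo->shrinklist_lock);

          /* Sleepable context: a final iput() may evict and truncate. */
          list_for_each_entry_safe(info, next, &to_put, shrinklist)
                  iput(&info->vfs_inode);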
      
      Link: http://lkml.kernel.org/r/20170131093141.GA15899@node.shutemov.name
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      253fd0f0
    • kasan: respect /proc/sys/kernel/traceoff_on_warning · 4f40c6e5
      Committed by Peter Zijlstra
      After much waiting I finally reproduced a KASAN issue, only to find my
      trace-buffer empty of useful information because it got spooled out :/
      
      Make kasan_report honour the /proc/sys/kernel/traceoff_on_warning
      interface.
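
      A sketch of the hook, assuming the same helper that WARN() uses for
      this sysctl:

          void kasan_report(unsigned long addr, size_t size,
                            bool is_write, unsigned long ip)
          {
                  /* Honour /proc/sys/kernel/traceoff_on_warning before
                   * the report scrolls the trace buffer away. */
                  disable_trace_on_warning();
                  /* ... print the report ... */
          }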
      
      Link: http://lkml.kernel.org/r/20170125164106.3514-1-aryabinin@virtuozzo.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Acked-by: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f40c6e5
    • zswap: disable changing params if init fails · d7b028f5
      Committed by Dan Streetman
      Add a zswap_init_failed bool that prevents changing any of the module
      params if init_zswap() fails, and set zswap_enabled to false.  Change
      the 'enabled' param to a callback, and check zswap_init_failed before
      allowing any change to the 'enabled', 'zpool', or 'compressor' params.
      
      Any driver that is built into the kernel will not be unloaded if its
      init function returns an error, and its module params remain
      accessible for users to change via sysfs.  Since zswap uses param
      callbacks, which assume that zswap has been initialized, changing the
      zswap params after a failed initialization will result in a WARNING
      because the param callbacks expect a pool to already exist.  This
      patch prevents that by immediately exiting any of the param callbacks
      if initialization failed.
      
      This was reported here:
        https://marc.info/?l=linux-mm&m=147004228125528&w=4
      
      And fixes this WARNING:
        [  429.723476] WARNING: CPU: 0 PID: 5140 at mm/zswap.c:503 __zswap_pool_current+0x56/0x60
      
      The warning is just noise, and not serious.  However, when init fails,
      zswap frees all its percpu dstmem pages and its kmem cache.  The freed
      kmem cache might be serious, if kmem_cache_alloc(NULL, gfp) has
      problems; but the freed percpu dstmem pages are definitely a problem,
      as they're used as temporary buffers for compressed pages before
      copying into place in the zpool.
      
      If the user does get zswap enabled after an init failure, then zswap
      will likely Oops on the first page it tries to compress (or worse, start
      corrupting memory).
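
      A sketch of the guard, with param_set_bool() standing in for the
      original 'enabled' handler (callback names are illustrative):

          static bool zswap_init_failed;

          static int zswap_enabled_param_set(const char *val,
                                             const struct kernel_param *kp)
          {
                  if (zswap_init_failed) {
                          pr_err("zswap: can't enable, initialization failed\n");
                          return -ENODEV;
                  }
                  return param_set_bool(val, kp);
          }

      The 'zpool' and 'compressor' callbacks get the same early exit.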
      
      Fixes: 90b0fc26 ("zswap: change zpool/compressor at runtime")
      Link: http://lkml.kernel.org/r/20170124200259.16191-2-ddstreet@ieee.org
      Signed-off-by: Dan Streetman <dan.streetman@canonical.com>
      Reported-by: Marcin Miroslaw <marcin@mejor.pl>
      Cc: Seth Jennings <sjenning@redhat.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d7b028f5
  3. 25 Jan, 2017 (10 commits)
  4. 11 Jan, 2017 (14 commits)
  5. 08 Jan, 2017 (2 commits)
    • mm: workingset: fix use-after-free in shadow node shrinker · ea07b862
      Committed by Johannes Weiner
      Several people report seeing warnings about inconsistent radix tree
      nodes followed by crashes in the workingset code, which all looked like
      use-after-free access from the shadow node shrinker.
      
      Dave Jones managed to reproduce the issue with a debug patch applied,
      which confirmed that the radix tree shrinking indeed frees shadow nodes
      while they are still linked to the shadow LRU:
      
        WARNING: CPU: 2 PID: 53 at lib/radix-tree.c:643 delete_node+0x1e4/0x200
        CPU: 2 PID: 53 Comm: kswapd0 Not tainted 4.10.0-rc2-think+ #3
        Call Trace:
           delete_node+0x1e4/0x200
           __radix_tree_delete_node+0xd/0x10
           shadow_lru_isolate+0xe6/0x220
           __list_lru_walk_one.isra.4+0x9b/0x190
           list_lru_walk_one+0x23/0x30
           scan_shadow_nodes+0x2e/0x40
           shrink_slab.part.44+0x23d/0x5d0
           shrink_node+0x22c/0x330
           kswapd+0x392/0x8f0
      
      This is the WARN_ON_ONCE(!list_empty(&node->private_list)) placed in the
      inlined radix_tree_shrink().
      
      The problem is with 14b46879 ("mm: workingset: move shadow entry
      tracking to radix tree exceptional tracking"), which passes an update
      callback into the radix tree to link and unlink shadow leaf nodes when
      tree entries change, but forgot to pass the callback when reclaiming a
      shadow node.
      
      While the reclaimed shadow node itself is unlinked by the shrinker, its
      deletion from the tree can cause the left-most leaf node in the tree to
      be shrunk.  If that happens to be a shadow node as well, we don't unlink
      it from the LRU as we should.
      
      Consider this tree, where the s are shadow entries:
      
             root->rnode
                  |
             [0       n]
              |       |
           [s    ] [sssss]
      
      Now the shadow node shrinker reclaims the rightmost leaf node through
      the shadow node LRU:
      
             root->rnode
                  |
             [0        ]
              |
          [s     ]
      
      Because the parent of the deleted node is the first level below the
      root and has only one child in the left-most slot, the intermediate
      level is shrunk and the node containing the single shadow is put in
      its place:
      
             root->rnode
                  |
             [s        ]
      
      The shrinker again sees a single left-most slot in a first level node
      and thus decides to store the shadow in root->rnode directly and free
      the node - which is a leaf node on the shadow node LRU.
      
        root->rnode
             |
             s
      
      Without the update callback, the freed node remains on the shadow LRU,
      where it causes later shrinker runs to crash.
      
      Pass the node updater callback into __radix_tree_delete_node() in case
      the deletion causes the left-most branch in the tree to collapse too.
      
      Also add warnings when linked nodes are freed right away, rather than
      wait for the use-after-free when the list is scanned much later.
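
      A sketch of the fixed deletion call, following the description above
      (the argument order is an assumption):

          /* Deletion can collapse the left-most branch, so it needs the
           * same node-update callback the insertion paths already pass;
           * workingset_update_node() unlinks shadow nodes from the LRU
           * before they can be freed. */
          __radix_tree_delete_node(&mapping->page_tree, node,
                                   workingset_update_node, mapping);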
      
      Fixes: 14b46879 ("mm: workingset: move shadow entry tracking to radix tree exceptional tracking")
      Reported-by: Dave Chinner <david@fromorbit.com>
      Reported-by: Hugh Dickins <hughd@google.com>
      Reported-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-and-tested-by: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Chris Leech <cleech@redhat.com>
      Cc: Lee Duncan <lduncan@suse.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Matthew Wilcox <mawilcox@linuxonhyperv.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea07b862
    • mm: stop leaking PageTables · b0b9b3df
      Committed by Hugh Dickins
      4.10-rc loadtest (even on x86, and even without THPCache) fails with
      "fork: Cannot allocate memory" or some such; and /proc/meminfo shows
      PageTables growing.
      
      Commit 953c66c2 ("mm: THP page cache support for ppc64") that got
      merged in rc1 removed the freeing of an unused preallocated pagetable
      after do_fault_around() has called map_pages().
      
      This is usually a good optimization, so that the followup doesn't have
      to reallocate one; but it's not sufficient to shift the freeing into
      alloc_set_pte(), since there are failure cases (most commonly
      VM_FAULT_RETRY) which never reach finish_fault().
      
      Check and free it at the outer level in do_fault(), then we don't need
      to worry in alloc_set_pte(), and can restore that to how it was (I
      cannot find any reason to pte_free() under lock as it was doing).
      
      And fix a separate pagetable leak, or crash, introduced by the same
      change, that could only show up on some ppc64: why does do_set_pmd()'s
      failure case attempt to withdraw a pagetable when it never deposited
      one, at the same time overwriting (so leaking) the vmf->prealloc_pte?
      Residue of an earlier implementation, perhaps? Delete it.
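
      A sketch of the outer-level cleanup (names follow the changelog; the
      fault dispatch itself is elided):

          static int do_fault(struct vm_fault *vmf)
          {
                  int ret;

                  /* ... ret = read / cow / shared fault handling ... */

                  /* Whatever path the fault took (including VM_FAULT_RETRY,
                   * which never reaches finish_fault()), release a
                   * preallocated page table that ended up unused. */
                  if (vmf->prealloc_pte) {
                          pte_free(vmf->vma->vm_mm, vmf->prealloc_pte);
                          vmf->prealloc_pte = NULL;
                  }
                  return ret;
          }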
      
      Fixes: 953c66c2 ("mm: THP page cache support for ppc64")
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0b9b3df
  6. 30 Dec, 2016 (2 commits)
    • mm/filemap: fix parameters to test_bit() · 98473f9f
      Committed by Olof Johansson
       mm/filemap.c: In function 'clear_bit_unlock_is_negative_byte':
        mm/filemap.c:933:9: error: too few arguments to function 'test_bit'
          return test_bit(PG_waiters);
               ^~~~~~~~
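
      The generic fallback just lost its second argument; the fix is
      presumably the one-liner (where 'mem' is the function's
      volatile void * parameter):

          return test_bit(PG_waiters, mem);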
      
      Fixes: b91e1302 ('mm: optimize PageWaiters bit use for unlock_page()')
      Signed-off-by: Olof Johansson <olof@lixom.net>
      Brown-paper-bag-by: Linus Torvalds <dummy@duh.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      98473f9f
    • mm: optimize PageWaiters bit use for unlock_page() · b91e1302
      Committed by Linus Torvalds
      In commit 62906027 ("mm: add PageWaiters indicating tasks are
      waiting for a page bit") Nick Piggin made our page locking no longer
      unconditionally touch the hashed page waitqueue, which not only helps
      performance in general, but is particularly helpful on NUMA machines
      where the hashed wait queues can bounce around a lot.
      
      However, the "clear lock bit atomically and then test the waiters bit"
      sequence turns out to be much more expensive than it needs to be,
      because you get a nasty stall when trying to access the same word that
      just got updated atomically.
      
      On architectures where locking is done with LL/SC, this would be trivial
      to fix with a new primitive that clears one bit and tests another
      atomically, but that ends up not working on x86, where the only atomic
      operations that return the result end up being cmpxchg and xadd.  The
      atomic bit operations return the old value of the same bit we changed,
      not the value of an unrelated bit.
      
      On x86, we could put the lock bit in the high bit of the byte, and use
      "xadd" with that bit (where the overflow ends up not touching other
      bits), and look at the other bits of the result.  However, an even
      simpler model is to just use a regular atomic "and" to clear the lock
      bit, and then the sign bit in eflags will indicate the resulting state
      of the unrelated bit #7.
      
      So by moving the PageWaiters bit up to bit #7, we can atomically clear
      the lock bit and test the waiters bit on x86 too.  And on
      architectures with LL/SC (which is all the usual RISC suspects), the
      particular bit doesn't matter, so they are fine with this approach too.
      
      This avoids the extra access to the same atomic word, and thus avoids
      the costly stall at page unlock time.
      
      The only downside is that the interface ends up being a bit odd and
      specialized: clear a bit in a byte, and test the sign bit.  Nick doesn't
      love the resulting name of the new primitive, but I'd rather make the
      name be descriptive and very clear about the limitation imposed by
      trying to work across all relevant architectures than make it be some
      generic thing that doesn't make the odd semantics explicit.
      
      So this introduces the new architecture primitive
      
          clear_bit_unlock_is_negative_byte();
      
      and adds the trivial implementation for x86.  We have a generic
      non-optimized fallback (that just does a "clear_bit()"+"test_bit(7)"
      combination) which can be overridden by any architecture that can do
      better.  According to Nick, Power has the same hiccup x86 has, for
      example, but some other architectures may not even care.
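
      A sketch of what that generic fallback looks like (the guard macro
      and exact location are assumptions):

          #ifndef clear_bit_unlock_is_negative_byte
          /* Non-optimized fallback: clear the lock bit with release
           * semantics, then separately test bit 7 of the same word.
           * x86 overrides this with a single "lock andb" whose sign
           * flag reports bit 7 directly. */
          static inline bool
          clear_bit_unlock_is_negative_byte(long nr, volatile void *mem)
          {
                  clear_bit_unlock(nr, mem);
                  return test_bit(7, mem);
          }
          #endif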
      
      All these optimizations mean that my page locking stress-test (which is
      just executing a lot of small short-lived shell scripts: "make test" in
      the git source tree) no longer makes our page locking look horribly bad.
      Before all these optimizations, the unlock_page() costs alone were
      just over 3% of all CPU overhead on "make test".  After this, it's
      down to 0.66%, about a quarter of the cost it used to be.
      
      (The difference on NUMA is bigger, but there this micro-optimization is
      likely less noticeable, since the big issue on NUMA was not the accesses
      to 'struct page', but the waitqueue accesses that were already removed
      by Nick's earlier commit).
      Acked-by: Nick Piggin <npiggin@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b91e1302
  7. 27 Dec, 2016 (1 commit)
    • mm: Invalidate DAX radix tree entries only if appropriate · c6dcf52c
      Committed by Jan Kara
      Currently invalidate_inode_pages2_range() and invalidate_mapping_pages()
      just delete all exceptional radix tree entries they find. For DAX this
      is not desirable as we track cache dirtiness in these entries and when
      they are evicted, we may not flush caches although it is necessary. This
      can for example manifest when we write to the same block both via mmap
      and via write(2) (to different offsets) and fsync(2) then does not
      properly flush CPU caches when modification via write(2) was the last
      one.
      
      Create appropriate DAX functions to handle invalidation of DAX entries
      for invalidate_inode_pages2_range() and invalidate_mapping_pages() and
      wire them up into the corresponding mm functions.
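
      A sketch of the core check, using the tags DAX dirty-tracking is
      known to use (the surrounding function shape is an assumption):

          /* invalidate_mapping_pages()-style callers pass trunc = false;
           * they must keep dirty entries, or fsync() loses the record
           * that CPU caches still need flushing for this block. */
          if (!trunc &&
              (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
               radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_TOWRITE)))
                  goto out;       /* leave the entry in place */
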
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      c6dcf52c
  8. 26 Dec, 2016 (2 commits)
    • mm: add PageWaiters indicating tasks are waiting for a page bit · 62906027
      Committed by Nicholas Piggin
      Add a new page flag, PageWaiters, to indicate the page waitqueue has
      tasks waiting. This can be tested rather than testing
      waitqueue_active, which requires another cacheline load.
      
      This bit is always set when the page has tasks on page_waitqueue(page),
      and is set and cleared under the waitqueue lock. It may be set when
      there are no tasks on the waitqueue, which will cause a harmless extra
      wakeup check that will clear the bit.
      
      The generic bit-waitqueue infrastructure is no longer used for pages.
      Instead, waitqueues are used directly with a custom key type. The
      generic code was not flexible enough to have PageWaiters manipulation
      under the waitqueue lock (which simplifies concurrency).
      
      This improves the performance of page lock intensive microbenchmarks by
      2-3%.
      
      Putting two bits in the same word opens the opportunity to remove the
      memory barrier between clearing the lock bit and testing the waiters
      bit, after some work on the arch primitives (e.g., ensuring memory
      operand widths match and cover both bits).
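
      A sketch of the unlock fast path this flag enables (close to, but
      not guaranteed to match, the patched unlock_page()):

          void unlock_page(struct page *page)
          {
                  page = compound_head(page);
                  clear_bit_unlock(PG_locked, &page->flags);
                  /* The barrier mentioned above: order clearing the lock
                   * bit against reading the waiters bit. */
                  smp_mb__after_atomic();
                  if (PageWaiters(page))
                          wake_up_page_bit(page, PG_locked);
          }
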
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62906027
    • mm: Use owner_priv bit for PageSwapCache, valid when PageSwapBacked · 6326fec1
      Committed by Nicholas Piggin
      A page is not added to the swap cache without being swap backed,
      so PageSwapBacked mappings can use PG_owner_priv_1 for PageSwapCache.
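
      A sketch of the aliasing (the real page-flags machinery goes through
      macro policies, so this is simplified):

          /* PG_swapcache no longer needs a bit of its own: it aliases
           * PG_owner_priv_1 and is meaningful only on swap-backed pages. */
          #define PG_swapcache PG_owner_priv_1

          static inline int PageSwapCache(struct page *page)
          {
                  return PageSwapBacked(page) &&
                         test_bit(PG_swapcache, &page->flags);
          }
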
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6326fec1
  9. 25 Dec, 2016 (1 commit)
  10. 21 Dec, 2016 (1 commit)
    • mm: fadvise: avoid expensive remote LRU cache draining after FADV_DONTNEED · 4dd72b4a
      Committed by Johannes Weiner
      When FADV_DONTNEED cannot drop all pages in the range, it observes
      that some pages might still be on per-cpu LRU caches after recent
      instantiation and so initiates remote calls to all CPUs to flush their
      local caches.  However, the fadvise usually happens from the same
      context that instantiated the pages, and any pre-LRU pages in the
      specified range are most likely sitting on the local CPU's LRU cache,
      so the remote calls are often unnecessary and, on a loaded system, can
      hold up the fadvise() call significantly.
      
      [ I didn't record it in the extreme case we observed at Facebook,
        unfortunately. We had a slow-to-respond system and noticed
        lru_add_drain_all() leading the profile during fadvise calls. This
        patch came out of thinking about the code and how we commonly call
        FADV_DONTNEED.
      
        FWIW, I wrote a silly directory tree walker/searcher that recurses
        through /usr to read and FADV_DONTNEED each file it finds. On a
        2-socket, 40-hyperthread machine, over 1% is spent in
        lru_add_drain_all(). With the patch, that cost is gone; the local
        drain cost shows at 0.09%. ]
      
      Try to avoid the remote call by flushing the local LRU cache before even
      attempting to invalidate anything.  It's a cheap operation, and the
      local LRU cache is the most likely to hold any pre-LRU pages in the
      specified fadvise range.
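
      A sketch of the reordered DONTNEED path (the retry shape is an
      assumption based on the text above):

          /* Drain only this CPU first: cheap, and it almost always holds
           * the pre-LRU pages the caller just instantiated. */
          lru_add_drain();
          count = invalidate_mapping_pages(mapping, start_index, end_index);

          /* Fall back to draining every CPU only if pages were missed. */
          if (count < (end_index - start_index) + 1) {
                  lru_add_drain_all();
                  invalidate_mapping_pages(mapping, start_index, end_index);
          }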
      
      Link: http://lkml.kernel.org/r/20161214210017.GA1465@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4dd72b4a