1. 16 May, 2023 2 commits
  2. 18 Jan, 2023 3 commits
  3. 16 Aug, 2022 1 commit
  4. 19 Jul, 2022 2 commits
  5. 13 Jul, 2022 2 commits
  6. 23 Feb, 2022 1 commit
  7. 29 Nov, 2021 1 commit
  8. 02 Sep, 2021 1 commit
  9. 14 Jul, 2021 17 commits
  10. 09 Mar, 2021 1 commit
    • mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked() · 19d76e3d
      Authored by Waiman Long
      stable inclusion
      from stable-5.10.15
      commit 032f8e04c0353f015d243008f039bb6e18a173c7
      bugzilla: 48167
      
      --------------------------------
      
      commit da74240e upstream.
      
      Commit 3fea5a49 ("mm: memcontrol: convert page cache to a new
      mem_cgroup_charge() API") introduced a bug in __add_to_page_cache_locked()
      causing the following splat:
      
        page dumped because: VM_BUG_ON_PAGE(page_memcg(page))
        pages's memcg:ffff8889a4116000
        ------------[ cut here ]------------
        kernel BUG at mm/memcontrol.c:2924!
        invalid opcode: 0000 [#1] SMP KASAN PTI
        CPU: 35 PID: 12345 Comm: cat Tainted: G S      W I       5.11.0-rc4-debug+ #1
        Hardware name: HP HP Z8 G4 Workstation/81C7, BIOS P60 v01.25 12/06/2017
        RIP: commit_charge+0xf4/0x130
        Call Trace:
          mem_cgroup_charge+0x175/0x770
          __add_to_page_cache_locked+0x712/0xad0
          add_to_page_cache_lru+0xc5/0x1f0
          cachefiles_read_or_alloc_pages+0x895/0x2e10 [cachefiles]
          __fscache_read_or_alloc_pages+0x6c0/0xa00 [fscache]
          __nfs_readpages_from_fscache+0x16d/0x630 [nfs]
          nfs_readpages+0x24e/0x540 [nfs]
          read_pages+0x5b1/0xc40
          page_cache_ra_unbounded+0x460/0x750
          generic_file_buffered_read_get_pages+0x290/0x1710
          generic_file_buffered_read+0x2a9/0xc30
          nfs_file_read+0x13f/0x230 [nfs]
          new_sync_read+0x3af/0x610
          vfs_read+0x339/0x4b0
          ksys_read+0xf1/0x1c0
          do_syscall_64+0x33/0x40
          entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Before that commit, __add_to_page_cache_locked() used a separate
      try_charge() and commit_charge().  These two charge functions were
      replaced by a single mem_cgroup_charge(), but the conversion forgot to
      add a matching mem_cgroup_uncharge() for the case where the xarray
      insertion fails and the page is released back to the pool.
      
      Fix this by calling mem_cgroup_uncharge() when an insertion error
      occurs.
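
      A minimal sketch of the shape of the fix (assumed close to the upstream
      change; the xas_store() insertion loop and unrelated error handling are
      elided): remember whether the charge succeeded, and undo it on the
      insertion error path.

        static int __add_to_page_cache_locked(struct page *page, ...)
        {
                bool charged = false;
                int error;

                if (!huge) {
                        error = mem_cgroup_charge(page, current->mm, gfp);
                        if (error)
                                goto error;
                        charged = true;         /* page is now charged */
                }

                /* ... xas_store() insertion loop elided ... */

                if (xas_error(&xas)) {
                        error = xas_error(&xas);
                        if (charged)            /* the missing counterpart */
                                mem_cgroup_uncharge(page);
                        goto error;
                }
                /* ... */
        }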
      
      Link: https://lkml.kernel.org/r/20210125042441.20030-1-longman@redhat.com
      Fixes: 3fea5a49 ("mm: memcontrol: convert page cache to a new mem_cgroup_charge() API")
      Signed-off-by: Waiman Long <longman@redhat.com>
      Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Muchun Song <smuchun@gmail.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
  11. 12 Dec, 2020 1 commit
  12. 07 Dec, 2020 1 commit
  13. 25 Nov, 2020 1 commit
    • mm: fix VM_BUG_ON(PageTail) and BUG_ON(PageWriteback) · 073861ed
      Authored by Hugh Dickins
      Twice now, when exercising ext4 looped on shmem huge pages, I have crashed
      on the PF_ONLY_HEAD check inside PageWaiters(): ext4_finish_bio() calling
      end_page_writeback() calling wake_up_page() on the tail of a shmem huge
      page, no longer an ext4 page at all.
      
      The problem is that PageWriteback is not accompanied by a page reference
      (as the NOTE at the end of test_clear_page_writeback() acknowledges): as
      soon as TestClearPageWriteback has been done, that page could be removed
      from page cache, freed, and reused for something else by the time that
      wake_up_page() is reached.
      
      https://lore.kernel.org/linux-mm/20200827122019.GC14765@casper.infradead.org/
      Matthew Wilcox suggested avoiding or weakening the PageWaiters() tail
      check; but I'm paranoid about even looking at an unreferenced struct page,
      lest its memory might itself have already been reused or hotremoved (and
      wake_up_page_bit() may modify that memory with its ClearPageWaiters()).
      
      Then, on crashing a second time, I realized there's a stronger reason
      against that approach.  If my testing just occasionally crashes on that check,
      when the page is reused for part of a compound page, wouldn't it be much
      more common for the page to get reused as an order-0 page before reaching
      wake_up_page()?  And on rare occasions, might that reused page already be
      marked PageWriteback by its new user, and already be waited upon?  What
      would that look like?
      
      It would look like BUG_ON(PageWriteback) after wait_on_page_writeback()
      in write_cache_pages() (though I have never seen that crash myself).
      
      Matthew Wilcox explaining this to himself:
       "page is allocated, added to page cache, dirtied, writeback starts,
      
        --- thread A ---
        filesystem calls end_page_writeback()
              test_clear_page_writeback()
        --- context switch to thread B ---
        truncate_inode_pages_range() finds the page, it doesn't have writeback set,
        we delete it from the page cache.  Page gets reallocated, dirtied, writeback
        starts again.  Then we call write_cache_pages(), see
        PageWriteback() set, call wait_on_page_writeback()
        --- context switch back to thread A ---
        wake_up_page(page, PG_writeback);
        ... thread B is woken, but because the wakeup was for the old use of
        the page, PageWriteback is still set.
      
        Devious"
      
      And prior to 2a9127fc ("mm: rewrite wait_on_page_bit_common() logic")
      this would have been much less likely: before that, wake_page_function()'s
      non-exclusive case would stop walking and not wake if it found Writeback
      already set again; whereas now the non-exclusive case proceeds to wake.
      
      I have not thought of a fix that does not add a little overhead: the
      simplest fix is for end_page_writeback() to get_page() before calling
      test_clear_page_writeback(), then put_page() after wake_up_page().
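
      A sketch of that simplest fix (assumed close to the upstream change; the
      PageReclaim handling at the top of end_page_writeback() is elided): pin
      the page across the clear and the wakeup so it cannot be freed and
      reused in between.

        void end_page_writeback(struct page *page)
        {
                /* ... PageReclaim/rotation handling elided ... */

                /*
                 * Writeback does not hold a page reference of its own; take
                 * one so the page cannot be freed and reused before the
                 * wake_up_page() below has run.
                 */
                get_page(page);
                if (!test_clear_page_writeback(page))
                        BUG();

                smp_mb__after_atomic();
                wake_up_page(page, PG_writeback);
                put_page(page);
        }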
      
      Was there a chance of missed wakeups before, since a page freed before
      reaching wake_up_page() would have PageWaiters cleared?  I think not,
      because each waiter does hold a reference on the page.  This bug comes
      when the old use of the page, the one we do TestClearPageWriteback on,
      had *no* waiters, so no additional page reference beyond the page cache
      (and whoever racily freed it).  The reuse of the page has a waiter
      holding a reference, and its own PageWriteback set; but the belated
      wake_up_page() has woken the reuse to hit that BUG_ON(PageWriteback).
      
      Reported-by: syzbot+3622cea378100f45d59f@syzkaller.appspotmail.com
      Reported-by: Qian Cai <cai@lca.pw>
      Fixes: 2a9127fc ("mm: rewrite wait_on_page_bit_common() logic")
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: stable@vger.kernel.org # v5.8+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 17 Nov, 2020 1 commit
    • mm: never attempt async page lock if we've transferred data already · 0abed7c6
      Authored by Jens Axboe
      We catch the case where we enter generic_file_buffered_read() with data
      already transferred, but we also need to be careful not to allow an async
      page lock if we're looping transferring data. If we do, we could return
      -EIOCBQUEUED instead of the transferred amount, and it could also result
      in double waitqueue additions.
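
      A hedged sketch of that rule (an illustration of the pattern described
      above, not the verbatim upstream diff; the surrounding read loop of
      generic_file_buffered_read() is elided): once 'written' is nonzero,
      never queue an async page lock, and bail out so the copied amount is
      returned and the caller retries.

        if (!PageUptodate(page)) {
                if (iocb->ki_flags & IOCB_WAITQ) {
                        if (written) {
                                /*
                                 * Data has already been transferred: do not
                                 * async-lock (which could end in -EIOCBQUEUED
                                 * and a second waitqueue addition); return
                                 * what we have via the 'out' label instead.
                                 */
                                put_page(page);
                                goto out;
                        }
                        error = lock_page_async(page, iocb->ki_waitq);
                } else {
                        error = lock_page_killable(page);
                }
                if (unlikely(error))
                        goto readpage_error;
        }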
      
      Cc: stable@vger.kernel.org # v5.9
      Fixes: 1a0a7853 ("mm: support async buffered reads in generic_file_buffered_read()")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  15. 18 Oct, 2020 1 commit
    • mm: mark async iocb read as NOWAIT once some data has been copied · 13bd6914
      Authored by Jens Axboe
      Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
      we should no longer attempt to async lock a new page. Instead, make sure
      we return the copied amount and let the caller retry, rather than
      returning -EIOCBQUEUED for a new page.
      
      This should only be possible with read-ahead disabled on the underlying
      device, and multiple threads racing on the same file. I haven't been able
      to reproduce it on anything else.
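
      A sketch of that guard (believed close to the upstream change; the
      per-page loop of generic_file_buffered_read() around it is elided):
      once some data has been copied for an IOCB_WAITQ read, flip the iocb to
      NOWAIT so no later page can trigger an async lock and an -EIOCBQUEUED
      return.

        /*
         * Once we've copied data for this iocb, returning -EIOCBQUEUED would
         * lose the progress already made: from here on, treat the async read
         * as NOWAIT so it bails out instead of queueing.
         */
        if (written && (iocb->ki_flags & IOCB_WAITQ))
                iocb->ki_flags |= IOCB_NOWAIT;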
      
      Cc: stable@vger.kernel.org # v5.9
      Fixes: 1a0a7853 ("mm: support async buffered reads in generic_file_buffered_read()")
      Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  16. 17 Oct, 2020 4 commits