1. 18 Jan, 2023 2 commits
  2. 30 Aug, 2022 1 commit
  3. 16 Aug, 2022 1 commit
  4. 25 May, 2022 4 commits
  5. 29 Jan, 2022 1 commit
  6. 14 Jan, 2022 1 commit
  7. 31 Dec, 2021 1 commit
  8. 29 Dec, 2021 1 commit
  9. 27 Dec, 2021 1 commit
    • fs: fix a hungtask problem when freeze/unfreeze fs · 77922e15
      geruijun authored
      euleros inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4M0EE?from=project-issue
      
      --------------------------------
      
      We found the following deadlock when running xfstests generic/390 on an ext4
      filesystem while simultaneously offlining/onlining the disk under test. The
      resulting hung task has this call trace:
      
      fsstress        D    0 11672  11625 0x00000080
      Call Trace:
       ? __schedule+0x2fc/0x930
       ? filename_parentat+0x10b/0x1a0
       schedule+0x28/0x70
       rwsem_down_read_failed+0x102/0x1c0
       ? __percpu_down_read+0x93/0xb0
       __percpu_down_read+0x93/0xb0
       __sb_start_write+0x5f/0x70
       mnt_want_write+0x20/0x50
       do_renameat2+0x1f3/0x550
       __x64_sys_rename+0x1c/0x20
       do_syscall_64+0x5b/0x1b0
       entry_SYSCALL_64_after_hwframe+0x65/0xca
      
      The root cause is that when ext4 hits an IO error because the disk has
      gone offline, it switches itself into read-only state. If the filesystem
      is frozen at that moment, the following thaw_super() call will not unlock
      the percpu freeze semaphores (since the fs is now read-only), causing the
      deadlock.
      
      Fix the problem by tracking whether the superblock was read-only at the
      time we froze it, as sketched below.
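      
      A minimal user-space sketch of that idea (all names are illustrative,
      not the kernel's): record at freeze time whether the writer lock was
      actually taken, and consult that record at thaw time instead of
      re-checking a read-only flag that may have flipped while frozen.
      
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      
      struct sb {
              pthread_rwlock_t writers;   /* stands in for the percpu freeze rwsems */
              bool read_only;             /* may flip to true on I/O error */
              bool frozen_writers_held;   /* recorded when we froze */
      };
      
      static void freeze(struct sb *sb)
      {
              if (!sb->read_only)
                      pthread_rwlock_wrlock(&sb->writers);
              /* remember what we actually did, not what the state is later */
              sb->frozen_writers_held = !sb->read_only;
      }
      
      static void thaw(struct sb *sb)
      {
              /* the buggy variant tests sb->read_only here and, if the fs went
                 read-only while frozen, skips the unlock: writers hang forever */
              if (sb->frozen_writers_held)
                      pthread_rwlock_unlock(&sb->writers);
      }
      
      int main(void)
      {
              struct sb sb = { .writers = PTHREAD_RWLOCK_INITIALIZER };
              freeze(&sb);
              sb.read_only = true;    /* disk error flips the fs read-only while frozen */
              thaw(&sb);              /* still unlocks: uses the recorded state */
              pthread_rwlock_rdlock(&sb.writers);   /* would block with the bug */
              puts("freeze lock released correctly");
              return 0;
      }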
      Reported-and-tested-by: Shijie Luo <luoshijie1@huawei.com>
      Signed-off-by: geruijun <geruijun@huawei.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  10. 29 Nov, 2021 2 commits
  11. 19 Oct, 2021 2 commits
  12. 15 Oct, 2021 1 commit
  13. 26 Sep, 2021 1 commit
  14. 14 Jul, 2021 1 commit
  15. 29 Jan, 2021 1 commit
    • proc: fix ubsan warning in mem_lseek · 1bb26e86
      yangerkun authored
      hulk inclusion
      category: bugfix
      bugzilla: 47438
      CVE: NA
      ---------------------------
      
      UBSAN reported a signed overflow in mem_lseek(). This is fine for files
      whose mode has FMODE_UNSIGNED_OFFSET set by mem_open() (memory_lseek).
      However, other files that use mem_lseek() for lseek may lack
      FMODE_UNSIGNED_OFFSET (proc_kpagecount_operations/proc_pagemap_operations);
      fix it by checking for overflow when FMODE_UNSIGNED_OFFSET is not set.
      A sketch of such a check follows the report below.
      Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
      
      ==================================================================
      UBSAN: Undefined behaviour in ../fs/proc/base.c:941:15
      signed integer overflow:
      4611686018427387904 + 4611686018427387904 cannot be represented in type 'long long int'
      CPU: 4 PID: 4762 Comm: syz-executor.1 Not tainted 4.4.189 #3
      Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      Call trace:
      [<ffffff90080a5f28>] dump_backtrace+0x0/0x590 arch/arm64/kernel/traps.c:91
      [<ffffff90080a64f0>] show_stack+0x38/0x60 arch/arm64/kernel/traps.c:234
      [<ffffff9008986a34>] __dump_stack lib/dump_stack.c:15 [inline]
      [<ffffff9008986a34>] dump_stack+0x128/0x184 lib/dump_stack.c:51
      [<ffffff9008a2d120>] ubsan_epilogue+0x34/0x9c lib/ubsan.c:166
      [<ffffff9008a2d8b8>] handle_overflow+0x228/0x280 lib/ubsan.c:197
      [<ffffff9008a2da2c>] __ubsan_handle_add_overflow+0x4c/0x68 lib/ubsan.c:204
      [<ffffff900862b9f4>] mem_lseek+0x12c/0x130 fs/proc/base.c:941
      [<ffffff90084ef78c>] vfs_llseek fs/read_write.c:260 [inline]
      [<ffffff90084ef78c>] SYSC_lseek fs/read_write.c:285 [inline]
      [<ffffff90084ef78c>] SyS_lseek+0x164/0x1f0 fs/read_write.c:276
      [<ffffff9008093c80>] el0_svc_naked+0x30/0x34
      ==================================================================
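      
      A minimal user-space sketch of the kind of check described above (names
      are illustrative; the kernel returns -EOVERFLOW from lseek rather than
      printing):
      
      #include <stdbool.h>
      #include <stdio.h>
      
      typedef long long loff_t;
      
      /* For files without FMODE_UNSIGNED_OFFSET, SEEK_CUR must reject an
         addition that overflows the signed offset or goes negative. */
      static bool seek_cur_checked(loff_t pos, loff_t off, loff_t *out)
      {
              loff_t sum;
      
              if (__builtin_add_overflow(pos, off, &sum) || sum < 0)
                      return false;   /* the kernel would return -EOVERFLOW */
              *out = sum;
              return true;
      }
      
      int main(void)
      {
              loff_t pos;
      
              /* the exact operands from the UBSAN report above */
              if (!seek_cur_checked(4611686018427387904LL, 4611686018427387904LL, &pos))
                      puts("overflow rejected instead of wrapping");
              return 0;
      }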
      Signed-off-by: yangerkun <yangerkun@huawei.com>
      Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
      Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
      (cherry picked from commit a422358aa04c53a08b215b8dcd6814d916ef5cf1)
      
      Conflicts:
      	fs/read_write.c
      Signed-off-by: Li Ming <limingming.li@huawei.com>
      Reviewed-by: zhangyi (F) <yi.zhang@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  16. 12 Jan, 2021 1 commit
  17. 11 Nov, 2020 2 commits
  18. 30 Oct, 2020 1 commit
  19. 17 Oct, 2020 2 commits
  20. 16 Oct, 2020 2 commits
  21. 15 Oct, 2020 1 commit
    • vfs: move generic_remap_checks out of mm · 02e83f46
      Darrick J. Wong authored
      I would like to move all the generic helpers for the vfs remap range
      functionality (aka clonerange and dedupe) into a separate file so that
      they won't be scattered across the vfs and the mm subsystems.  The
      eventual goal is to be able to deselect remap_range.c if none of the
      filesystems need that code, but the tricky part here is picking a
      stable(ish) part of the merge window to rearrange code.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
  22. 14 Oct, 2020 1 commit
    • mm, fadvise: improve the expensive remote LRU cache draining after FADV_DONTNEED · eb1d7a65
      Yafang Shao authored
      Our users reported random latency spikes while their RT process was
      running.  We finally found that the latency spikes were caused by
      FADV_DONTNEED, which may call lru_add_drain_all() to drain the LRU cache
      on remote CPUs and then wait for the per-CPU work to complete.  The wait
      time is unpredictable and may reach tens of milliseconds.
      
      That behavior is unreasonable, because this process is bound to a specific
      CPU and the file is only accessed by itself; IOW, there should be no
      pagecache pages on the per-cpu pagevec of any remote CPU.  The unreasonable
      behavior is partially caused by a wrong comparison between the number of
      invalidated pages and the number of target pages.  For example,
      
              if (count < (end_index - start_index + 1))
      
      The count above is how many pages were invalidated on the local CPU, and
      (end_index - start_index + 1) is how many pages should be invalidated.
      Using (end_index - start_index + 1) is incorrect, because those indexes
      are derived from virtual addresses that may not be mapped to pages, and
      there may also be holes between start and end.  So we'd better check
      whether there are still pages on a per-cpu pagevec after draining the
      local CPU, and only then decide whether or not to call
      lru_add_drain_all(); a toy model of that decision follows below.
      
      After I applied this as a hotfix in our production environment, most of
      the lru_add_drain_all() calls could be avoided.
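      
      A toy user-space model of the corrected decision (illustrative code, not
      the kernel patch): drain only the local CPU's cache first, then escalate
      to the expensive all-CPU drain only if pages actually remain elsewhere.
      
      #include <stdbool.h>
      #include <stdio.h>
      
      #define NCPUS 4
      static int pending[NCPUS];      /* pages sitting on per-CPU pagevecs */
      
      static void drain_local(int cpu) { pending[cpu] = 0; }  /* cheap */
      
      static void drain_all(void)                             /* expensive */
      {
              for (int c = 0; c < NCPUS; c++)
                      pending[c] = 0;
              puts("expensive drain_all");
      }
      
      static bool pages_remain(void)
      {
              for (int c = 0; c < NCPUS; c++)
                      if (pending[c])
                              return true;
              return false;
      }
      
      static void fadvise_dontneed(int cpu)
      {
              drain_local(cpu);       /* always drain the local pagevec */
              if (pages_remain())     /* escalate only when necessary */
                      drain_all();
      }
      
      int main(void)
      {
              pending[0] = 3;         /* CPU-bound task: local pages only */
              fadvise_dontneed(0);    /* no drain_all, no latency spike */
              pending[2] = 1;         /* a page stuck on a remote CPU */
              fadvise_dontneed(0);    /* now the all-CPU drain fires */
              return 0;
      }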
      Suggested-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Link: https://lkml.kernel.org/r/20200923133318.14373-1-laoar.shao@gmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  23. 07 Oct, 2020 1 commit
  24. 05 Oct, 2020 3 commits
  25. 03 Oct, 2020 1 commit
    • iov_iter: refactor rw_copy_check_uvector and import_iovec · bfdc5970
      Christoph Hellwig authored
      Split rw_copy_check_uvector into two new helpers with more sensible
      calling conventions:
      
       - iovec_from_user copies an iovec from userspace, either into the
         provided stack buffer if it fits or into a newly allocated buffer.
         It returns the actually used iovec, verifies that each iov_len fits
         a signed type, and handles compat iovecs if the compat flag is set.
       - __import_iovec consolidates the native and compat versions of
         import_iovec.  It calls iovec_from_user, then validates that each
         iovec actually points to user addresses and that the total length
         doesn't overflow.
      
      This has two major implications:
      
       - the access_process_vm case loses the total length checking, which
         wasn't required anyway, given that each call receives two iovecs
         for the local and remote side of the operation and already verifies
         the total length on the local side.
       - instead of a single loop there are now two loops over the iovecs.
         Given that the iovecs are cache hot, this doesn't make a major
         difference.  A sketch of the stack fast path follows this list.
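      
      A minimal user-space analogue of that fast path (illustrative names,
      not the kernel helper itself): copy the iovec array into the caller's
      stack buffer when it fits, otherwise heap-allocate, and let the caller
      see which buffer was actually used.
      
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/uio.h>
      
      #define FASTIOV 8
      
      /* Copy nr iovecs into `fast` when they fit, else into a heap buffer.
         The caller frees the result only if it differs from `fast`. */
      static struct iovec *iovec_copy(const struct iovec *src, unsigned nr,
                                      struct iovec *fast, unsigned fast_segs)
      {
              struct iovec *iov = fast;
      
              if (nr > fast_segs) {
                      iov = malloc(nr * sizeof(*iov));
                      if (!iov)
                              return NULL;
              }
              memcpy(iov, src, nr * sizeof(*iov));
              return iov;
      }
      
      int main(void)
      {
              struct iovec stack[FASTIOV];
              char buf[4];
              struct iovec user_iov[1] = { { .iov_base = buf, .iov_len = sizeof(buf) } };
              struct iovec *iov = iovec_copy(user_iov, 1, stack, FASTIOV);
      
              printf("used the %s buffer\n", iov == stack ? "stack" : "heap");
              if (iov && iov != stack)
                      free(iov);
              return 0;
      }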
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  26. 01 Oct, 2020 2 commits
  27. 27 Sep, 2020 2 commits