1. 24 March 2021, 2 commits
  2. 21 March 2021, 10 commits
  3. 20 March 2021, 1 commit
    • cifs: fix allocation size on newly created files · 65af8f01
      Steve French committed
      Applications that create, extend, and write to a file do not
      expect to see an allocation size of 0.  When a file is extended,
      set its allocation size to a plausible value until we have a
      chance to query the server for it.  When the file is cached,
      this will prevent showing an impossible number of allocated
      blocks (like 0).  This fixes e.g. xfstests 614, which does
      
          1) create a file and set its size to 64K
          2) mmap write 64K to the file
          3) stat -c %b for the file (to query the number of allocated blocks)
      
      It was failing because we returned 0 blocks.  Even though we would
      return the correct cached file size, we returned an impossible
      allocation size.
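      
      A minimal sketch of the idea (hedged: illustrative of the fix, not
      the literal patch; pos is the end offset of the extending write):
      
        /* When an extending write completes, derive a plausible block
         * count from the new size instead of leaving it at 0 blocks
         * (512-byte units, rounded up), until the server is queried. */
        spin_lock(&inode->i_lock);
        if (pos > inode->i_size) {
                i_size_write(inode, pos);
                inode->i_blocks = (512 - 1 + pos) >> 9;
        }
        spin_unlock(&inode->i_lock);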
      Signed-off-by: Steve French <stfrench@microsoft.com>
      CC: <stable@vger.kernel.org>
      Reviewed-by: Aurelien Aptel <aaptel@suse.com>
  4. 19 March 2021, 2 commits
  5. 18 March 2021, 4 commits
  6. 17 March 2021, 5 commits
    • zonefs: fix to update .i_wr_refcnt correctly in zonefs_open_zone() · 6980d29c
      Chao Yu committed
      In zonefs_open_zone(), if the opened zone count exceeds the
      .s_max_open_zones threshold, we fail to restore .i_wr_refcnt;
      fix this.
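      
      A hedged sketch of the corrected flow (illustrative; zi and sbi are
      the zonefs inode/superblock info structs): increment the refcount
      only after every check has succeeded, so error paths need no
      recovery:
      
        if (!zi->i_wr_refcnt) {
                /* All failure checks happen before the refcount moves. */
                if (atomic_inc_return(&sbi->s_open_zones) >
                    sbi->s_max_open_zones) {
                        atomic_dec(&sbi->s_open_zones);
                        ret = -EBUSY;
                        goto unlock;
                }
        }
        zi->i_wr_refcnt++;      /* only reached on success */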
      
      Fixes: b5c00e97 ("zonefs: open/close zone on file open/close")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Chao Yu <yuchao0@huawei.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
    • kernel, fs: Introduce and use set_restart_fn() and arch_set_restart_data() · 5abbe51a
      Oleg Nesterov committed
      Preparation for fixing get_nr_restart_syscall() on X86 for COMPAT.
      
      Add a new helper which sets restart_block->fn and calls a dummy
      arch_set_restart_data() helper.
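      
      A sketch of the helper as described (close to the shape above; the
      arch hook defaults to a no-op unless an architecture overrides it):
      
        #ifndef arch_set_restart_data
        #define arch_set_restart_data(restart) do { } while (0)
        #endif
        
        static inline long set_restart_fn(struct restart_block *restart,
                                          long (*fn)(struct restart_block *))
        {
                restart->fn = fn;
                arch_set_restart_data(restart);
                return -ERESTART_RESTARTBLOCK;
        }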
      
      Fixes: 609c19a3 ("x86/ptrace: Stop setting TS_COMPAT in ptrace code")
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/r/20210201174641.GA17871@redhat.com
    • btrfs: always pin deleted leaves when there are active tree mod log users · 485df755
      Filipe Manana committed
      When freeing a tree block we may end up adding its extent back to the
      free space cache/tree, as long as there are no more references to it,
      it was created in the current transaction, and writeback for it never
      happened. This is generally fine; however, when we have tree mod log
      operations it can result in inconsistent versions of a btree after
      unwinding extent buffers with the recorded tree mod log operations.
      
      This is because:
      
      * We only log operations for nodes (adding and removing key/pointers),
        for leaves we don't do anything;
      
      * This means that we can log a MOD_LOG_KEY_REMOVE_WHILE_FREEING operation
        for a node that points to a leaf that was deleted;
      
      * Before we apply the logged operation to unwind a node, we can have
        that leaf's extent allocated again, either as a node or as a leaf, and
        possibly for another btree. This is possible if the leaf was created in
        the current transaction and writeback for it never started, in which
        case btrfs_free_tree_block() returns its extent back to the free space
        cache/tree;
      
      * Then, before applying the tree mod log operation, some task allocates
        the metadata extent just freed before, and uses it either as a leaf or
        as a node for some btree (can be the same or another one, it does not
        matter);
      
      * After applying the MOD_LOG_KEY_REMOVE_WHILE_FREEING operation we now
        get the target node with an item pointing to the metadata extent that
        now has content different from what it had before the leaf was deleted.
        It might now belong to a different btree and be a node and not a leaf
        anymore.
      
        As a consequence, the results of searches after the unwinding can be
        unpredictable and produce unexpected results.
      
      So make sure we pin extent buffers corresponding to leaves when there
      are tree mod log users.
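      
      A hedged sketch of the check in btrfs_free_tree_block() (illustrative;
      the exact condition in the real patch may differ):
      
        bool must_pin = false;
        
        /* Leaves (level 0) are not tracked by the tree mod log, so pin
         * them while any tree mod log user is active instead of
         * returning the extent to the free space cache/tree. */
        if (btrfs_header_level(buf) == 0 &&
            !list_empty(&fs_info->tree_mod_seq_list))
                must_pin = true;
        
        if (must_pin || btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN)) {
                pin_down_extent(trans, cache, buf->start, buf->len, 1);
                goto out;
        }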
      
      CC: stable@vger.kernel.org # 4.14+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix race when cloning extent buffer during rewind of an old root · dbcc7d57
      Filipe Manana committed
      While resolving backreferences, as part of a logical ino ioctl call or
      fiemap, we can end up hitting a BUG_ON() when replaying tree mod log
      operations of a root, triggering a stack trace like the following:
      
        ------------[ cut here ]------------
        kernel BUG at fs/btrfs/ctree.c:1210!
        invalid opcode: 0000 [#1] SMP KASAN PTI
        CPU: 1 PID: 19054 Comm: crawl_335 Tainted: G        W         5.11.0-2d11c0084b02-misc-next+ #89
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
        RIP: 0010:__tree_mod_log_rewind+0x3b1/0x3c0
        Code: 05 48 8d 74 10 (...)
        RSP: 0018:ffffc90001eb70b8 EFLAGS: 00010297
        RAX: 0000000000000000 RBX: ffff88812344e400 RCX: ffffffffb28933b6
        RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffff88812344e42c
        RBP: ffffc90001eb7108 R08: 1ffff11020b60a20 R09: ffffed1020b60a20
        R10: ffff888105b050f9 R11: ffffed1020b60a1f R12: 00000000000000ee
        R13: ffff8880195520c0 R14: ffff8881bc958500 R15: ffff88812344e42c
        FS:  00007fd1955e8700(0000) GS:ffff8881f5600000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007efdb7928718 CR3: 000000010103a006 CR4: 0000000000170ee0
        Call Trace:
         btrfs_search_old_slot+0x265/0x10d0
         ? lock_acquired+0xbb/0x600
         ? btrfs_search_slot+0x1090/0x1090
         ? free_extent_buffer.part.61+0xd7/0x140
         ? free_extent_buffer+0x13/0x20
         resolve_indirect_refs+0x3e9/0xfc0
         ? lock_downgrade+0x3d0/0x3d0
         ? __kasan_check_read+0x11/0x20
         ? add_prelim_ref.part.11+0x150/0x150
         ? lock_downgrade+0x3d0/0x3d0
         ? __kasan_check_read+0x11/0x20
         ? lock_acquired+0xbb/0x600
         ? __kasan_check_write+0x14/0x20
         ? do_raw_spin_unlock+0xa8/0x140
         ? rb_insert_color+0x30/0x360
         ? prelim_ref_insert+0x12d/0x430
         find_parent_nodes+0x5c3/0x1830
         ? resolve_indirect_refs+0xfc0/0xfc0
         ? lock_release+0xc8/0x620
         ? fs_reclaim_acquire+0x67/0xf0
         ? lock_acquire+0xc7/0x510
         ? lock_downgrade+0x3d0/0x3d0
         ? lockdep_hardirqs_on_prepare+0x160/0x210
         ? lock_release+0xc8/0x620
         ? fs_reclaim_acquire+0x67/0xf0
         ? lock_acquire+0xc7/0x510
         ? poison_range+0x38/0x40
         ? unpoison_range+0x14/0x40
         ? trace_hardirqs_on+0x55/0x120
         btrfs_find_all_roots_safe+0x142/0x1e0
         ? find_parent_nodes+0x1830/0x1830
         ? btrfs_inode_flags_to_xflags+0x50/0x50
         iterate_extent_inodes+0x20e/0x580
         ? tree_backref_for_extent+0x230/0x230
         ? lock_downgrade+0x3d0/0x3d0
         ? read_extent_buffer+0xdd/0x110
         ? lock_downgrade+0x3d0/0x3d0
         ? __kasan_check_read+0x11/0x20
         ? lock_acquired+0xbb/0x600
         ? __kasan_check_write+0x14/0x20
         ? _raw_spin_unlock+0x22/0x30
         ? __kasan_check_write+0x14/0x20
         iterate_inodes_from_logical+0x129/0x170
         ? iterate_inodes_from_logical+0x129/0x170
         ? btrfs_inode_flags_to_xflags+0x50/0x50
         ? iterate_extent_inodes+0x580/0x580
         ? __vmalloc_node+0x92/0xb0
         ? init_data_container+0x34/0xb0
         ? init_data_container+0x34/0xb0
         ? kvmalloc_node+0x60/0x80
         btrfs_ioctl_logical_to_ino+0x158/0x230
         btrfs_ioctl+0x205e/0x4040
         ? __might_sleep+0x71/0xe0
         ? btrfs_ioctl_get_supported_features+0x30/0x30
         ? getrusage+0x4b6/0x9c0
         ? __kasan_check_read+0x11/0x20
         ? lock_release+0xc8/0x620
         ? __might_fault+0x64/0xd0
         ? lock_acquire+0xc7/0x510
         ? lock_downgrade+0x3d0/0x3d0
         ? lockdep_hardirqs_on_prepare+0x210/0x210
         ? lockdep_hardirqs_on_prepare+0x210/0x210
         ? __kasan_check_read+0x11/0x20
         ? do_vfs_ioctl+0xfc/0x9d0
         ? ioctl_file_clone+0xe0/0xe0
         ? lock_downgrade+0x3d0/0x3d0
         ? lockdep_hardirqs_on_prepare+0x210/0x210
         ? __kasan_check_read+0x11/0x20
         ? lock_release+0xc8/0x620
         ? __task_pid_nr_ns+0xd3/0x250
         ? lock_acquire+0xc7/0x510
         ? __fget_files+0x160/0x230
         ? __fget_light+0xf2/0x110
         __x64_sys_ioctl+0xc3/0x100
         do_syscall_64+0x37/0x80
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
        RIP: 0033:0x7fd1976e2427
        Code: 00 00 90 48 8b 05 (...)
        RSP: 002b:00007fd1955e5cf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
        RAX: ffffffffffffffda RBX: 00007fd1955e5f40 RCX: 00007fd1976e2427
        RDX: 00007fd1955e5f48 RSI: 00000000c038943b RDI: 0000000000000004
        RBP: 0000000001000000 R08: 0000000000000000 R09: 00007fd1955e6120
        R10: 0000557835366b00 R11: 0000000000000246 R12: 0000000000000004
        R13: 00007fd1955e5f48 R14: 00007fd1955e5f40 R15: 00007fd1955e5ef8
        Modules linked in:
        ---[ end trace ec8931a1c36e57be ]---
      
        (gdb) l *(__tree_mod_log_rewind+0x3b1)
        0xffffffff81893521 is in __tree_mod_log_rewind (fs/btrfs/ctree.c:1210).
        1205                     * the modification. as we're going backwards, we do the
        1206                     * opposite of each operation here.
        1207                     */
        1208                    switch (tm->op) {
        1209                    case MOD_LOG_KEY_REMOVE_WHILE_FREEING:
        1210                            BUG_ON(tm->slot < n);
        1211                            fallthrough;
        1212                    case MOD_LOG_KEY_REMOVE_WHILE_MOVING:
        1213                    case MOD_LOG_KEY_REMOVE:
        1214                            btrfs_set_node_key(eb, &tm->key, tm->slot);
      
      Here's what happens to hit that BUG_ON():
      
      1) We have one tree mod log user (through fiemap or the logical ino ioctl),
         with a sequence number of 1, so we have fs_info->tree_mod_seq == 1;
      
      2) Another task is at ctree.c:balance_level() and we have eb X currently as
         the root of the tree, and we promote its single child, eb Y, as the new
         root.
      
         Then, at ctree.c:balance_level(), we call:
      
            tree_mod_log_insert_root(eb X, eb Y, 1);
      
      3) At tree_mod_log_insert_root() we create tree mod log elements for each
         slot of eb X, of operation type MOD_LOG_KEY_REMOVE_WHILE_FREEING each
         with a ->logical pointing to ebX->start. These are placed in an array
         named tm_list.
         Let's assume there are N elements (N pointers in eb X);
      
      4) Then, still at tree_mod_log_insert_root(), we create a tree mod log
         element of operation type MOD_LOG_ROOT_REPLACE, ->logical set to
         ebY->start, ->old_root.logical set to ebX->start, ->old_root.level set
         to the level of eb X and ->generation set to the generation of eb X;
      
      5) Then tree_mod_log_insert_root() calls tree_mod_log_free_eb() with
         tm_list as argument. After that, tree_mod_log_free_eb() calls
         __tree_mod_log_insert() for each member of tm_list in reverse order,
         from highest slot in eb X, slot N - 1, to slot 0 of eb X;
      
      6) __tree_mod_log_insert() sets the sequence number of each given tree mod
         log operation - it increments fs_info->tree_mod_seq and sets
         fs_info->tree_mod_seq as the sequence number of the given tree mod log
         operation.
      
         This means that for the tm_list created at tree_mod_log_insert_root(),
         the element corresponding to slot 0 of eb X has the highest sequence
         number (1 + N), and the element corresponding to the last slot has the
         lowest sequence number (2);
      
      7) Then, after inserting tm_list's elements into the tree mod log rbtree,
         the MOD_LOG_ROOT_REPLACE element is inserted, which gets the highest
         sequence number, which is N + 2;
      
      8) Back to ctree.c:balance_level(), we free eb X by calling
         btrfs_free_tree_block() on it. Because eb X was created in the current
         transaction, has no other references and writeback did not happen for
         it, we add it back to the free space cache/tree;
      
      9) Later some other task T allocates the metadata extent from eb X, since
         it is marked as free space in the space cache/tree, and uses it as a
         node for some other btree;
      
      10) The tree mod log user task calls btrfs_search_old_slot(), which calls
          get_old_root(), and finally that calls __tree_mod_log_oldest_root()
          with time_seq == 1 and eb_root == eb Y;
      
      11) First iteration of the while loop finds the tree mod log element with
          sequence number N + 2, for the logical address of eb Y and of type
          MOD_LOG_ROOT_REPLACE;
      
      12) Because the operation type is MOD_LOG_ROOT_REPLACE, we don't break out
          of the loop, and set root_logical to point to tm->old_root.logical
          which corresponds to the logical address of eb X;
      
      13) On the next iteration of the while loop, the call to
          tree_mod_log_search_oldest() returns the smallest tree mod log element
          for the logical address of eb X, which has a sequence number of 2, an
          operation type of MOD_LOG_KEY_REMOVE_WHILE_FREEING and corresponds to
          the old slot N - 1 of eb X (eb X had N items in it before being freed);
      
      14) We then break out of the while loop and return the tree mod log operation
          of type MOD_LOG_ROOT_REPLACE (eb Y), and not the one for slot N - 1 of
          eb X, to get_old_root();
      
      15) At get_old_root(), we process the MOD_LOG_ROOT_REPLACE operation
          and set "logical" to the logical address of eb X, which was the old
          root. We then call tree_mod_log_search() passing it the logical
          address of eb X and time_seq == 1;
      
      16) Then before calling tree_mod_log_search(), task T adds a key to eb X,
          which results in adding a tree mod log operation of type
          MOD_LOG_KEY_ADD to the tree mod log - this is done at
          ctree.c:insert_ptr() - but after adding the tree mod log operation
          and before updating the number of items in eb X from 0 to 1...
      
      17) The task at get_old_root() calls tree_mod_log_search() and gets the
          tree mod log operation of type MOD_LOG_KEY_ADD just added by task T.
          Then it enters the following if branch:
      
          if (old_root && tm && tm->op != MOD_LOG_KEY_REMOVE_WHILE_FREEING) {
             (...)
          } (...)
      
          Calls read_tree_block() for eb X, which gets a reference on eb X but
          does not lock it - task T has it locked.
          Then it clones eb X while it has nritems set to 0 in its header, before
          task T sets nritems to 1 in eb X's header. From here on we use the
          clone of eb X, which no other task has access to;
      
      18) Then we call __tree_mod_log_rewind(), passing it the MOD_LOG_KEY_ADD
          mod log operation we just got from tree_mod_log_search() in the
          previous step and the cloned version of eb X;
      
      19) At __tree_mod_log_rewind(), we set the local variable "n" to the number
          of items set in eb X's clone, which is 0. Then we enter the while loop,
          and in its first iteration we process the MOD_LOG_KEY_ADD operation,
          which just decrements "n" from 0 to (u32)-1, since "n" is declared with
          a type of u32. At the end of this iteration we call rb_next() to find the
          next tree mod log operation for eb X, that gives us the mod log operation
          of type MOD_LOG_KEY_REMOVE_WHILE_FREEING, for slot 0, with a sequence
          number of N + 1 (steps 3 to 6);
      
      20) Then we go back to the top of the while loop and trigger the following
          BUG_ON():
      
              (...)
              switch (tm->op) {
              case MOD_LOG_KEY_REMOVE_WHILE_FREEING:
                       BUG_ON(tm->slot < n);
                       fallthrough;
              (...)
      
          Because "n" has a value of (u32)-1 (4294967295) and tm->slot is 0.
      
      Fix this by taking a read lock on the extent buffer before cloning it at
      ctree.c:get_old_root(). This should be done regardless of the extent
      buffer having been freed and reused, as a concurrent task might be
      modifying it (while holding a write lock on it).
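      
      A sketch of the fix at get_old_root() (hedged; surrounding error
      handling omitted):
      
        /* Take a read lock before cloning, so a concurrent writer,
         * which holds a write lock, cannot modify eb X mid-clone. */
        btrfs_tree_read_lock(old);
        eb = btrfs_clone_extent_buffer(old);
        btrfs_tree_read_unlock(old);
        free_extent_buffer(old);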
      Reported-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
      Link: https://lore.kernel.org/linux-btrfs/20210227155037.GN28049@hungrycats.org/
      Fixes: 834328a8 ("Btrfs: tree mod log's old roots could still be part of the tree")
      CC: stable@vger.kernel.org # 4.4+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix slab cache flags for free space tree bitmap · 34e49994
      David Sterba committed
      The free space tree bitmap slab cache is created with SLAB_RED_ZONE,
      but that's a debugging flag and not always enabled. Also, the other
      slabs are created with at least SLAB_MEM_SPREAD, which we want here as
      well to average the memory placement cost.
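      
      A hedged sketch of the change (argument layout illustrative):
      
        /* Was SLAB_RED_ZONE (debug-only); use SLAB_MEM_SPREAD like the
         * other btrfs slab caches. */
        btrfs_free_space_bitmap_cachep = kmem_cache_create(
                        "btrfs_free_space_bitmap",
                        PAGE_SIZE, PAGE_SIZE, SLAB_MEM_SPREAD, NULL);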
      Reported-by: Vlastimil Babka <vbabka@suse.cz>
      Fixes: 3acd4850 ("btrfs: fix allocation of free space cache v1 bitmap pages")
      CC: stable@vger.kernel.org # 5.4+
      Signed-off-by: David Sterba <dsterba@suse.com>
  7. 16 March 2021, 7 commits
    • fuse: 32-bit user space ioctl compat for fuse device · f8425c93
      Alessio Balsini committed
      With a 64-bit kernel build, the FUSE device cannot handle ioctl
      requests coming from 32-bit user space.  This is due to the ioctl
      command translation, which generates different command identifiers
      that therefore cannot be used for direct comparisons without proper
      manipulation.
      
      Explicitly extract type and number from the ioctl command to enable 32-bit
      user space compatibility on 64-bit kernel builds.
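      
      A hedged sketch of the dispatch in fuse_dev_ioctl() (illustrative):
      
        /* Compare type and number separately so that compat-translated
         * command values still match. */
        if (_IOC_TYPE(cmd) != FUSE_DEV_IOC_MAGIC)
                return -ENOTTY;
        
        switch (_IOC_NR(cmd)) {
        case _IOC_NR(FUSE_DEV_IOC_CLONE):
                /* handle device clone */
                break;
        default:
                return -ENOTTY;
        }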
      Signed-off-by: Alessio Balsini <balsini@android.com>
      Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
    • btrfs: subpage: make readahead work properly · 60484cd9
      Qu Wenruo committed
      In the readahead infrastructure, we use a lot of hard-coded PAGE_SHIFT
      even though nothing there is specific to PAGE_SIZE.
      
      One of the most affected parts is the radix tree operation of
      btrfs_fs_info::reada_tree.
      
      If using PAGE_SHIFT, subpage metadata readahead is broken and does not
      help read metadata ahead at all.
      
      Fix the problem by using btrfs_fs_info::sectorsize_bits so that
      readahead works for subpage.
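      
      A representative example of the change (hedged; the real patch
      touches several call sites):
      
        /* Index the reada radix tree by sector, not by page, so that
         * subpage (sectorsize < PAGE_SIZE) metadata is covered. */
        index = eb->start >> fs_info->sectorsize_bits;  /* was >> PAGE_SHIFT */
        re = radix_tree_lookup(&fs_info->reada_tree, index);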
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: fix wild pointer access during metadata read failure · d9bb77d5
      Qu Wenruo committed
      [BUG]
      When running fstests for the btrfs subpage read-write test, it very
      frequently crashes at generic/475 with the following stack:
      
       BTRFS warning (device dm-8): direct IO failed ino 510 rw 1,34817 sector 0xcdf0 len 94208 err no 10
       Unable to handle kernel paging request at virtual address ffff80001157e7c0
       CPU: 2 PID: 687125 Comm: kworker/u12:4 Tainted: G        WC        5.12.0-rc2-custom+ #5
       Hardware name: Khadas VIM3 (DT)
       Workqueue: btrfs-endio-meta btrfs_work_helper [btrfs]
       pc : queued_spin_lock_slowpath+0x1a0/0x390
       lr : do_raw_spin_lock+0xc4/0x11c
       Call trace:
        queued_spin_lock_slowpath+0x1a0/0x390
        _raw_spin_lock+0x68/0x84
        btree_readahead_hook+0x38/0xc0 [btrfs]
        end_bio_extent_readpage+0x504/0x5f4 [btrfs]
        bio_endio+0x170/0x1a4
        end_workqueue_fn+0x3c/0x60 [btrfs]
        btrfs_work_helper+0x1b0/0x1b4 [btrfs]
        process_one_work+0x22c/0x430
        worker_thread+0x70/0x3a0
        kthread+0x13c/0x140
        ret_from_fork+0x10/0x30
       Code: 910020e0 8b0200c2 f861d884 aa0203e1 (f8246827)
      
      [CAUSE]
      In end_bio_extent_readpage(), if we hit an error during read, we will
      handle the error differently for data and metadata.
      For data we queue a repair, while for metadata, we record the error and
      let the caller choose what to do.
      
      But the code is still using page->private to grab the extent buffer,
      which no longer points to an extent buffer for subpage metadata pages.
      
      Thus this wild pointer access leads to the above crash.
      
      [FIX]
      Introduce a helper, find_extent_buffer_readpage(), to grab the extent
      buffer.
      
      The differences against find_extent_buffer_nospinlock() are:
      
      - It also handles the regular sectorsize == PAGE_SIZE case
      - It does not increase/decrease extent buffer refs
        As an extent buffer under IO must have non-zero refs, this is safe
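      
      A hedged sketch of the helper (close to the description above, but
      illustrative rather than the literal patch):
      
        static struct extent_buffer *find_extent_buffer_readpage(
                        struct btrfs_fs_info *fs_info, struct page *page,
                        u64 bytenr)
        {
                struct extent_buffer *eb;
        
                /* Regular sectorsize: page->private points to the eb. */
                if (fs_info->sectorsize == PAGE_SIZE)
                        return (struct extent_buffer *)page->private;
        
                /* Subpage: look up the buffer radix tree instead. */
                rcu_read_lock();
                eb = radix_tree_lookup(&fs_info->buffer_radix,
                                       bytenr >> fs_info->sectorsize_bits);
                rcu_read_unlock();
                ASSERT(eb);
                return eb;
        }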
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • zonefs: Fix O_APPEND async write handling · ebfd68cd
      Damien Le Moal committed
      zonefs updates the size of a sequential zone file inode only on
      completion of direct writes. When executing asynchronous append writes
      (with a file open with O_APPEND or using RWF_APPEND), the use of the
      current inode size in generic_write_checks() to set the iocb offset
      thus leads to unaligned writes if an application issues an append
      write while another write is already executing.
      
      Fix this problem by introducing zonefs_write_checks() as a modified
      version of generic_write_checks(), using the file inode wp_offset for
      an append write iocb offset. Also introduce zonefs_write_check_limits()
      to replace the generic_write_check_limits() call. This zonefs-specific
      helper makes sure that the maximum file limit used is the maximum size
      of the file being accessed.
      
      Since zonefs_write_checks() already truncates the iov_iter, the calls
      to iov_iter_truncate() in zonefs_file_dio_write() and
      zonefs_file_buffered_write() are removed.
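      
      A hedged sketch of the append handling inside zonefs_write_checks()
      (illustrative, assuming the zonefs inode fields named here):
      
        if (iocb->ki_flags & IOCB_APPEND) {
                /* Appends to a sequential zone file must start at the
                 * zone write pointer, not at a possibly stale i_size. */
                if (zi->i_ztype != ZONEFS_ZTYPE_SEQ)
                        return -EINVAL;
                iocb->ki_pos = zi->i_wpoffset;
        }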
      
      Fixes: 8dcc1a9d ("fs: New zonefs file system")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
    • zonefs: prevent use of seq files as swap file · 1601ea06
      Damien Le Moal committed
      The sequential write constraint of sequential zone files prevents
      their use as swap files. Only allow conventional zone files to be used
      as swap files.
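      
      A hedged sketch of how this can be enforced via a swap_activate
      method (illustrative):
      
        static int zonefs_swap_activate(struct swap_info_struct *sis,
                                        struct file *swap_file,
                                        sector_t *span)
        {
                struct inode *inode = file_inode(swap_file);
        
                /* Sequential zone files cannot back a swap file. */
                if (ZONEFS_I(inode)->i_ztype != ZONEFS_ZTYPE_CNV)
                        return -EPERM;
        
                return iomap_swapfile_activate(sis, swap_file, span,
                                               &zonefs_iomap_ops);
        }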
      
      Fixes: 8dcc1a9d ("fs: New zonefs file system")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
    • afs: Stop listxattr() from listing "afs.*" attributes · a7889c63
      David Howells committed
      afs_listxattr() lists all the available special afs xattrs (i.e. those in
      the "afs.*" space), no matter what type of server we're dealing with.  But
      OpenAFS servers, for example, cannot deal with some of the extra-capable
      attributes that AuriStor (YFS) servers provide.  Unfortunately, the
      presence of the afs.yfs.* attributes causes errors[1] for anything that
      tries to read them if the server is of the wrong type.
      
      Fix the problem by removing afs_listxattr() so that none of the special
      xattrs are listed (AFS doesn't support xattrs).  It does mean, however,
      that getfattr won't list them, though they can still be accessed with
      getxattr() and setxattr().
      
      This can be tested with something like:
      
      	getfattr -d -m ".*" /afs/example.com/path/to/file
      
      With this change, none of the afs.* attributes should be visible.
      
      Changes:
      ver #2:
       - Hide all of the afs.* xattrs, not just the ACL ones.
      
      Fixes: ae46578b ("afs: Get YFS ACLs and information through xattrs")
      Reported-by: Gaja Sophie Peters <gaja.peters@math.uni-hamburg.de>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Gaja Sophie Peters <gaja.peters@math.uni-hamburg.de>
      Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
      Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
      cc: linux-afs@lists.infradead.org
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003502.html [1]
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003567.html # v1
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003573.html # v2
    • afs: Fix accessing YFS xattrs on a non-YFS server · 64fcbb61
      David Howells committed
      If someone attempts to access YFS-related xattrs (e.g. afs.yfs.acl) on a
      file on a non-YFS AFS server (such as OpenAFS), then the kernel will jump
      to a NULL function pointer because the afs_fetch_acl_operation descriptor
      doesn't point to a function for issuing an operation on a non-YFS
      server[1].
      
      Fix this by making afs_wait_for_operation() check that the issue_afs_rpc
      method is set before jumping to it and setting -ENOTSUPP if not.  This fix
      also covers other potential operations that also only exist on YFS servers.
      
      afs_xattr_get/set_yfs() then need to translate -ENOTSUPP to -ENODATA as the
      former error is internal to the kernel.
      
      The bug shows up as an oops like the following:
      
      	BUG: kernel NULL pointer dereference, address: 0000000000000000
      	[...]
      	Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.
      	[...]
      	Call Trace:
      	 afs_wait_for_operation+0x83/0x1b0 [kafs]
      	 afs_xattr_get_yfs+0xe6/0x270 [kafs]
      	 __vfs_getxattr+0x59/0x80
      	 vfs_getxattr+0x11c/0x140
      	 getxattr+0x181/0x250
      	 ? __check_object_size+0x13f/0x150
      	 ? __fput+0x16d/0x250
      	 __x64_sys_fgetxattr+0x64/0xb0
      	 do_syscall_64+0x49/0xc0
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      	RIP: 0033:0x7fb120a9defe
      
      This was triggered with "cp -a" which attempts to copy xattrs, including
      afs ones, but is easier to reproduce with getfattr, e.g.:
      
      	getfattr -d -m ".*" /afs/openafs.org/
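      
      A hedged sketch of the check in afs_wait_for_operation() (close to
      the description above; illustrative):
      
        if (test_bit(AFS_SERVER_FL_IS_YFS, &op->server->flags) &&
            op->ops->issue_yfs_rpc)
                op->ops->issue_yfs_rpc(op);
        else if (op->ops->issue_afs_rpc)
                op->ops->issue_afs_rpc(op);
        else
                op->ac.error = -ENOTSUPP;  /* no way to issue on this server */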
      
      Fixes: e49c7b2f ("afs: Build an abstraction around an "operation" concept")
      Reported-by: Gaja Sophie Peters <gaja.peters@math.uni-hamburg.de>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Gaja Sophie Peters <gaja.peters@math.uni-hamburg.de>
      Reviewed-by: Marc Dionne <marc.dionne@auristor.com>
      Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
      cc: linux-afs@lists.infradead.org
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003498.html [1]
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003566.html # v1
      Link: http://lists.infradead.org/pipermail/linux-afs/2021-March/003572.html # v2
  8. 15 March 2021, 9 commits
    • btrfs: zoned: fix linked list corruption after log root tree allocation failure · e3d3b415
      Filipe Manana committed
      When using a zoned filesystem, while syncing the log, if we fail to
      allocate the root node for the log root tree, we are not removing the
      log context we allocated on the stack from the list of log contexts of
      the log root tree. This means that after the return from
      btrfs_sync_log() we get a corrupted linked list.
      
      Fix this by allocating the node before adding our stack-allocated
      context to the list of log contexts of the log root tree.
      
      Fixes: 3ddebf27 ("btrfs: zoned: reorder log node allocation on zoned filesystem")
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix qgroup data rsv leak caused by falloc failure · a3ee79bd
      Qu Wenruo committed
      [BUG]
      When running fsstress with only an falloc workload and a very low
      qgroup limit set, we can get a qgroup data rsv leak at unmount time.
      
       BTRFS warning (device dm-0): qgroup 0/5 has unreleased space, type 0 rsv 20480
       BTRFS error (device dm-0): qgroup reserved space leaked
      
      The minimal reproducer looks like:
      
        #!/bin/bash
        dev=/dev/test/test
        mnt="/mnt/btrfs"
        fsstress=~/xfstests-dev/ltp/fsstress
        runtime=8
      
        workload()
        {
                umount $dev &> /dev/null
                umount $mnt &> /dev/null
                mkfs.btrfs -f $dev > /dev/null
                mount $dev $mnt
      
                btrfs quota en $mnt
                btrfs quota rescan -w $mnt
                btrfs qgroup limit 16m 0/5 $mnt
      
                $fsstress -w -z -f creat=10 -f fallocate=10 -p 2 -n 100 \
        		-d $mnt -v > /tmp/fsstress
      
                umount $mnt
                if dmesg | grep leak ; then
      		echo "!!! FAILED !!!"
        		exit 1
                fi
        }
      
        for (( i=0; i < $runtime; i++)); do
                echo "=== $i/$runtime==="
                workload
        done
      
      Normally it would fail before round 4.
      
      [CAUSE]
      In function insert_prealloc_file_extent(), we first call
      btrfs_qgroup_release_data() to know how many bytes are reserved for
      qgroup data rsv.
      
      Then use that @qgroup_released number to continue our work.
      
      But after we call btrfs_qgroup_release_data(), we should either queue
      the @qgroup_released bytes to a delayed ref or free them manually in
      the error path.
      
      Unfortunately, we lack the error handling to free the released bytes,
      leaking the qgroup data rsv.
      
      None of the error handling in the callers helps at all, as we have
      already released the range, meaning that in the inode io tree the
      EXTENT_QGROUP_RESERVED bit is already cleared, thus any
      btrfs_qgroup_free_data() call won't free any data rsv.
      
      [FIX]
      Add a free_qgroup label to manually free the released qgroup data rsv.
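      
      A hedged sketch of the error path (illustrative; helper names are
      from the btrfs qgroup code):
      
        free_qgroup:
                /* The EXTENT_QGROUP_RESERVED bit was already cleared by
                 * btrfs_qgroup_release_data(), so free the bytes directly. */
                btrfs_qgroup_free_refroot(inode->root->fs_info,
                                          inode->root->root_key.objectid,
                                          qgroup_released,
                                          BTRFS_QGROUP_RSV_DATA);
                return ERR_PTR(ret);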
      Reported-by: Nikolay Borisov <nborisov@suse.com>
      Reported-by: David Sterba <dsterba@suse.cz>
      Fixes: 9729f10a ("btrfs: inode: move qgroup reserved space release to the callers of insert_reserved_file_extent()")
      CC: stable@vger.kernel.org # 5.10+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: track qgroup released data in own variable in insert_prealloc_file_extent · fbf48bb0
      Qu Wenruo committed
      There is a piece of weird code in insert_prealloc_file_extent(), which
      looks like:
      
      	ret = btrfs_qgroup_release_data(inode, file_offset, len);
      	if (ret < 0)
      		return ERR_PTR(ret);
      	if (trans) {
      		ret = insert_reserved_file_extent(trans, inode,
      						  file_offset, &stack_fi,
      						  true, ret);
      	...
      	}
      	extent_info.is_new_extent = true;
      	extent_info.qgroup_reserved = ret;
      	...
      
      Note how the variable @ret is abused here: if anyone adds code just
      after the btrfs_qgroup_release_data() call, it's very easy to
      overwrite @ret and cause all sorts of qgroup-related bugs.
      
      Fix such abuse by introducing a new variable, @qgroup_released, so
      that we won't reuse the existing variable @ret.
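      
      The reworked flow then looks roughly like this (a hedged sketch):
      
      	qgroup_released = btrfs_qgroup_release_data(inode, file_offset, len);
      	if (qgroup_released < 0)
      		return ERR_PTR(qgroup_released);
      	if (trans) {
      		ret = insert_reserved_file_extent(trans, inode,
      						  file_offset, &stack_fi,
      						  true, qgroup_released);
      	...
      	}
      	extent_info.is_new_extent = true;
      	extent_info.qgroup_reserved = qgroup_released;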
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix wrong offset to zero out range beyond i_size · d2dcc8ed
      Qu Wenruo committed
      [BUG]
      The test generic/091 fails with the following output:
      
        fsx -N 10000 -o 128000 -l 500000 -r PSIZE -t BSIZE -w BSIZE -Z -W
        mapped writes DISABLED
        Seed set to 1
        main: filesystem does not support fallocate mode FALLOC_FL_COLLAPSE_RANGE, disabling!
        main: filesystem does not support fallocate mode FALLOC_FL_INSERT_RANGE, disabling!
        skipping zero size read
        truncating to largest ever: 0xe400
        copying to largest ever: 0x1f400
        cloning to largest ever: 0x70000
        cloning to largest ever: 0x77000
        fallocating to largest ever: 0x7a120
        Mapped Read: non-zero data past EOF (0x3a7ff) page offset 0x800 is 0xf2e1 <<<
        ...
      
      [CAUSE]
      In commit c28ea613 ("btrfs: subpage: fix the false data csum mismatch error")
      end_bio_extent_readpage() was changed to only zero the range inside
      the bvec, for incoming subpage support.
      
      But that commit uses an incorrect offset to calculate the start.
      
      For subpage, we can have a case where the whole bvec is beyond i_size,
      thus we need to calculate the correct offset.
      
      But the offending commit uses @end (bvec end), rather than @start
      (bvec start), to calculate the start offset.
      
      This means we only zero the last byte of the bvec, not everything from
      i_size onwards. This bug leaves the range beyond i_size improperly
      zeroed, and fails the above test.
      
      [FIX]
      Use the correct @start to calculate the range start.
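      
      A hedged sketch of the change in end_bio_extent_readpage()
      (illustrative):
      
        if (page->index == end_index && i_size <= end) {
                /* Zero from max(i_size, bvec start) rather than from the
                 * bvec end, so the whole in-bvec range past EOF is cleared. */
                u32 zero_start = max(offset_in_page(i_size),
                                     offset_in_page(start)); /* was: end */
        
                zero_user_segment(page, zero_start, offset_in_page(end) + 1);
        }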
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Fixes: c28ea613 ("btrfs: subpage: fix the false data csum mismatch error")
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • xfs: also reject BULKSTAT_SINGLE in a mount user namespace · 8723d5ba
      Christoph Hellwig committed
      BULKSTAT_SINGLE exposes the on-disk uids/gids just like bulkstat, and
      can be called on any inode, including ones not visible in the current
      mount.
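      
      A conceptual sketch of the kind of check involved (hedged; the exact
      placement and error code in the real patch may differ):
      
        /* Bulkstat-style ioctls return raw on-disk ids, so only allow
         * them from the initial user namespace. */
        if (current_user_ns() != &init_user_ns)
                return -EINVAL;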
      
      Fixes: f736d93d ("xfs: support idmapped mounts")
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Darrick J. Wong <djwong@kernel.org>
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
    • xfs: force log and push AIL to clear pinned inodes when aborting mount · d336f7eb
      Darrick J. Wong committed
      If we allocate quota inodes in the process of mounting a filesystem but
      then decide to abort the mount, it's possible that the quota inodes are
      sitting around pinned by the log.  Now that inode reclaim relies on the
      AIL to flush inodes, we have to force the log and push the AIL in
      between releasing the quota inodes and kicking off reclaim to tear down
      all the incore inodes.  Do this by extracting the bits we need from the
      unmount path and reusing them.  As an added bonus, failed writes during
      a failed mount will not retry forever now.
      
      This was originally found during a fuzz test of metadata directories
      (xfs/1546), but the actual symptom was that reclaim hung up on the quota
      inodes.
      Signed-off-by: Darrick J. Wong <djwong@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
    • io_uring: fix sqpoll cancellation via task_work · b7f5a0bf
      Pavel Begunkov committed
      Running sqpoll cancellations via task_work_run() is a bad idea because
      it depends on other task works being run, but those may be stuck in a
      currently running task_work_run() because of how it works (splicing
      the list in batches).
      
      Enqueue and run them through a separate callback head, namely
      struct io_sq_data::park_task_work. As a nice bonus we now precisely
      control where it's run, which is much safer than guessing where it can
      happen, as it was before.
      Reported-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: add generic callback_head helpers · 9b465711
      Pavel Begunkov committed
      We already have helpers to run/add a callback_head, but they take a
      ctx and work with ctx->exit_task_work. Extract generic versions of
      them, implemented in terms of struct callback_head; they will be used
      later.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: fix concurrent parking · 9e138a48
      Pavel Begunkov committed
      If a task running io_sq_thread_park() gets rescheduled right after
      set_bit(), another task can park()/unpark() before it gets back to
      mutex_lock(), with SQPOLL locking again and continuing to run, never
      seeing that first set_bit(SHOULD_PARK), so it won't even try to put
      the mutex down for parking.
      
      It will get parked eventually when SQPOLL drops the lock for a
      reschedule, but this may be problematic and will get in the way of
      further fixes.
      
      Account the number of tasks waiting to park with a new atomic variable
      park_pending and adjust SHOULD_PARK accordingly. This doesn't entirely
      replace the SHOULD_PARK bit with the atomic var because it's convenient
      to have it as a bit in the state, and it will help with later
      optimisations.
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>