1. 10 March 2022 (5 commits)
  2. 06 March 2022 (2 commits)
  3. 04 March 2022 (2 commits)
    • btrfs: fallback to blocking mode when doing async dio over multiple extents · ca93e44b
      Authored by Filipe Manana
      Some users recently reported that MariaDB was getting a read corruption
      when using io_uring on top of btrfs. This started to happen in 5.16,
      after commit 51bd9563 ("btrfs: fix deadlock due to page faults
      during direct IO reads and writes"). That changed btrfs to use the new
      iomap flag IOMAP_DIO_PARTIAL and to disable page faults before calling
      iomap_dio_rw(). This was necessary to fix deadlocks when the iovector
      corresponds to a memory mapped file region. That type of scenario is
      exercised by test case generic/647 from fstests.
      
      For this MariaDB scenario, we attempt to read 16K from file offset X
      using IOCB_NOWAIT and io_uring. In that range we have 4 extents, each
      with a size of 4K, and what happens is the following:
      
      1) btrfs_direct_read() disables page faults and calls iomap_dio_rw();
      
      2) iomap creates a struct iomap_dio object, its reference count is
         initialized to 1 and its ->size field is initialized to 0;
      
      3) iomap calls btrfs_dio_iomap_begin() with file offset X, which finds
         the first 4K extent, and sets up an iomap for this extent consisting
         of a single page;
      
      4) At iomap_dio_bio_iter(), we are able to access the first page of the
         buffer (struct iov_iter) with bio_iov_iter_get_pages() without
         triggering a page fault;
      
      5) iomap submits a bio for this 4K extent
         (iomap_dio_submit_bio() -> btrfs_submit_direct()) and increments
         the refcount on the struct iomap_dio object to 2; The ->size field
         of the struct iomap_dio object is incremented to 4K;
      
      6) iomap calls btrfs_dio_iomap_begin() again, this time with a file
         offset of X + 4K. There we set up an iomap for the next extent
         that also has a size of 4K;
      
      7) Then at iomap_dio_bio_iter() we call bio_iov_iter_get_pages(),
         which tries to access the next page (2nd page) of the buffer.
         This triggers a page fault and returns -EFAULT;
      
      8) At __iomap_dio_rw() we see the -EFAULT, but we reset the error
         to 0 because we passed the flag IOMAP_DIO_PARTIAL to iomap and
         the struct iomap_dio object has a ->size value of 4K (we submitted
         a bio for an extent already). The 'wait_for_completion' variable
         is not set to true, because our iocb has IOCB_NOWAIT set;
      
      9) At the bottom of __iomap_dio_rw(), we decrement the reference count
         of the struct iomap_dio object from 2 to 1. Because we were not
         the only ones holding a reference on it and 'wait_for_completion' is
         set to false, -EIOCBQUEUED is returned to btrfs_direct_read(), which
         just returns it up the callchain, up to io_uring;
      
      10) The bio submitted for the first extent (step 5) completes and its
          bio endio function, iomap_dio_bio_end_io(), decrements the last
          reference on the struct iomap_dio object, resulting in calling
          iomap_dio_complete_work() -> iomap_dio_complete().
      
      11) At iomap_dio_complete() we adjust the iocb->ki_pos from X to X + 4K
          and return 4K (the amount of io done) to iomap_dio_complete_work();
      
      12) iomap_dio_complete_work() calls the iocb completion callback,
          iocb->ki_complete() with a second argument value of 4K (total io
           done) and the iocb with the adjusted ki_pos of X + 4K. This results
          in completing the read request for io_uring, leaving it with a
          result of 4K bytes read, and only the first page of the buffer
          filled in, while the remaining 3 pages, corresponding to the other
          3 extents, were not filled;
      
      13) For the application, the result is unexpected because if we ask
          to read N bytes, it expects to get N bytes read as long as those
          N bytes don't cross the EOF (i_size).
      
      MariaDB reports this as an error, as it's not expecting a short read,
      since it knows it's asking for read operations fully within the i_size
       boundary. This is typical in many applications, though it is arguably
       questionable whether they should instead react to such short reads by
       issuing more read calls to get the remaining data. Nevertheless, the short read
      happened due to a change in btrfs regarding how it deals with page
      faults while in the middle of a read operation, and there's no reason
      why btrfs can't have the previous behaviour of returning the whole data
      that was requested by the application.
      
      The problem can also be triggered with the following simple program:
      
        /* Get O_DIRECT */
        #ifndef _GNU_SOURCE
        #define _GNU_SOURCE
        #endif
      
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <errno.h>
        #include <string.h>
        #include <liburing.h>
      
        int main(int argc, char *argv[])
        {
            char *foo_path;
            struct io_uring ring;
            struct io_uring_sqe *sqe;
            struct io_uring_cqe *cqe;
            struct iovec iovec;
            int fd;
            long pagesize;
            void *write_buf;
            void *read_buf;
            ssize_t ret;
            int i;
      
            if (argc != 2) {
                fprintf(stderr, "Use: %s <directory>\n", argv[0]);
                return 1;
            }
      
            foo_path = malloc(strlen(argv[1]) + 5);
            if (!foo_path) {
                fprintf(stderr, "Failed to allocate memory for file path\n");
                return 1;
            }
            strcpy(foo_path, argv[1]);
            strcat(foo_path, "/foo");
      
            /*
             * Create file foo with 2 extents, each with a size matching
             * the page size. Then allocate a buffer to read both extents
             * with io_uring, using O_DIRECT and IOCB_NOWAIT. Before doing
             * the read with io_uring, access the first page of the buffer
             * to fault it in, so that during the read we only trigger a
             * page fault when accessing the second page of the buffer.
             */
             fd = open(foo_path, O_CREAT | O_TRUNC | O_WRONLY |
                      O_DIRECT, 0666);
             if (fd == -1) {
                 fprintf(stderr,
                         "Failed to create file 'foo': %s (errno %d)",
                         strerror(errno), errno);
                 return 1;
             }
      
             pagesize = sysconf(_SC_PAGE_SIZE);
             ret = posix_memalign(&write_buf, pagesize, 2 * pagesize);
             if (ret) {
                 fprintf(stderr, "Failed to allocate write buffer\n");
                 return 1;
             }
      
             memset(write_buf, 0xab, pagesize);
             memset(write_buf + pagesize, 0xcd, pagesize);
      
             /* Create 2 extents, each with a size matching page size. */
             for (i = 0; i < 2; i++) {
                 ret = pwrite(fd, write_buf + i * pagesize, pagesize,
                              i * pagesize);
                 if (ret != pagesize) {
                     fprintf(stderr,
                           "Failed to write to file, ret = %ld errno %d (%s)\n",
                            ret, errno, strerror(errno));
                     return 1;
                 }
                 ret = fsync(fd);
                 if (ret != 0) {
                     fprintf(stderr, "Failed to fsync file\n");
                     return 1;
                 }
             }
      
             close(fd);
             fd = open(foo_path, O_RDONLY | O_DIRECT);
             if (fd == -1) {
                 fprintf(stderr,
                         "Failed to open file 'foo': %s (errno %d)",
                         strerror(errno), errno);
                 return 1;
             }
      
             ret = posix_memalign(&read_buf, pagesize, 2 * pagesize);
             if (ret) {
                 fprintf(stderr, "Failed to allocate read buffer\n");
                 return 1;
             }
      
             /*
              * Fault in only the first page of the read buffer.
              * We want to trigger a page fault for the 2nd page of the
              * read buffer during the read operation with io_uring
              * (O_DIRECT and IOCB_NOWAIT).
              */
             memset(read_buf, 0, 1);
      
             ret = io_uring_queue_init(1, &ring, 0);
             if (ret != 0) {
                 fprintf(stderr, "Failed to create io_uring queue\n");
                 return 1;
             }
      
             sqe = io_uring_get_sqe(&ring);
             if (!sqe) {
                 fprintf(stderr, "Failed to get io_uring sqe\n");
                 return 1;
             }
      
             iovec.iov_base = read_buf;
             iovec.iov_len = 2 * pagesize;
             io_uring_prep_readv(sqe, fd, &iovec, 1, 0);
      
             ret = io_uring_submit_and_wait(&ring, 1);
             if (ret != 1) {
                 fprintf(stderr,
                         "Failed at io_uring_submit_and_wait()\n");
                 return 1;
             }
      
             ret = io_uring_wait_cqe(&ring, &cqe);
             if (ret < 0) {
                 fprintf(stderr, "Failed at io_uring_wait_cqe()\n");
                 return 1;
             }
      
             printf("io_uring read result for file foo:\n\n");
             printf("  cqe->res == %d (expected %ld)\n", cqe->res, 2 * pagesize);
             printf("  memcmp(read_buf, write_buf) == %d (expected 0)\n",
                    memcmp(read_buf, write_buf, 2 * pagesize));
      
             io_uring_cqe_seen(&ring, cqe);
             io_uring_queue_exit(&ring);
      
             return 0;
        }
      
      When running it on an unpatched kernel:
      
        $ gcc io_uring_test.c -luring
        $ mkfs.btrfs -f /dev/sda
        $ mount /dev/sda /mnt/sda
        $ ./a.out /mnt/sda
        io_uring read result for file foo:
      
          cqe->res == 4096 (expected 8192)
          memcmp(read_buf, write_buf) == -205 (expected 0)
      
      After this patch, the read always returns 8192 bytes, with the buffer
      filled with the correct data. Although that reproducer always triggers
       the bug in my test VMs, it's possible that it will not be so reliable
      on other environments, as that can happen if the bio for the first
      extent completes and decrements the reference on the struct iomap_dio
      object before we do the atomic_dec_and_test() on the reference at
      __iomap_dio_rw().
      
      Fix this in btrfs by having btrfs_dio_iomap_begin() return -EAGAIN
       whenever we try to satisfy a non-blocking IO request (IOMAP_NOWAIT flag
      set) over a range that spans multiple extents (or a mix of extents and
      holes). This avoids returning success to the caller when we only did
      partial IO, which is not optimal for writes and for reads it's actually
      incorrect, as the caller doesn't expect to get less bytes read than it has
      requested (unless EOF is crossed), as previously mentioned. This is also
      the type of behaviour that xfs follows (xfs_direct_write_iomap_begin()),
      even though it doesn't use IOMAP_DIO_PARTIAL.
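
       The shape of that check can be sketched in userspace as a pure decision
       function; the name and parameters below are illustrative, not the actual
       kernel helpers:

```c
#include <errno.h>
#include <stdbool.h>

/*
 * Illustrative model of the fix, not the kernel code: for a non-blocking
 * (IOMAP_NOWAIT-style) direct IO request covering [pos, pos + len), where
 * extent lookup found an extent covering [extent_start, extent_start +
 * extent_len), return -EAGAIN when the extent ends before the request
 * does, i.e. the range spans multiple extents or a mix of extents and
 * holes.
 */
static int nowait_dio_extent_check(bool nowait, long long pos, long long len,
                                   long long extent_start, long long extent_len)
{
    long long extent_end = extent_start + extent_len;

    if (nowait && extent_end < pos + len)
        return -EAGAIN; /* caller falls back to a blocking retry */
    return 0;           /* one extent covers the whole request */
}
```

       For the MariaDB case above, a 16K NOWAIT read whose first extent is only
       4K would get -EAGAIN and be retried in blocking mode by the caller.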
      
      A test case for fstests will follow soon.
      
      Link: https://lore.kernel.org/linux-btrfs/CABVffEM0eEWho+206m470rtM0d9J8ue85TtR-A_oVTuGLWFicA@mail.gmail.com/
      Link: https://lore.kernel.org/linux-btrfs/CAHF2GV6U32gmqSjLe=XKgfcZAmLCiH26cJ2OnHGp5x=VAH4OHQ@mail.gmail.com/
      CC: stable@vger.kernel.org # 5.16+
       Reviewed-by: Josef Bacik <josef@toxicpanda.com>
       Signed-off-by: Filipe Manana <fdmanana@suse.com>
       Signed-off-by: David Sterba <dsterba@suse.com>
    • cachefiles: Fix incorrect length to fallocate() · b08968f1
      Authored by David Howells
      When cachefiles_shorten_object() calls fallocate() to shape the cache
      file to match the DIO size, it passes the total file size it wants to
      achieve, not the amount of zeros that should be inserted.  Since this is
      meant to preallocate that amount of storage for the file, it can cause
      the cache to fill up the disk and hit ENOSPC.
      
      Fix this by passing the length actually required to go from the current
      EOF to the desired EOF.
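
       The corrected arithmetic is simple enough to sketch in userspace (the
       helper name is hypothetical):

```c
/*
 * Hypothetical helper illustrating the corrected arithmetic: when
 * shaping a cache file whose current EOF is cur_eof up to new_eof, the
 * length passed to fallocate() must be the number of bytes to add, not
 * the target size. Passing new_eof itself (the bug) preallocates the
 * whole target size again beyond the current EOF.
 */
static long long fallocate_grow_len(long long cur_eof, long long new_eof)
{
    return new_eof - cur_eof; /* bytes of storage to preallocate */
}
```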
      
      Fixes: 7623ed67 ("cachefiles: Implement cookie resize for truncate")
       Reported-by: Jeffle Xu <jefflexu@linux.alibaba.com>
       Signed-off-by: David Howells <dhowells@redhat.com>
       Tested-by: Jeff Layton <jlayton@kernel.org>
       Reviewed-by: Jeff Layton <jlayton@kernel.org>
       cc: linux-cachefs@redhat.com
       Link: https://lore.kernel.org/r/164630854858.3665356.17419701804248490708.stgit@warthog.procyon.org.uk # v1
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 02 March 2022 (10 commits)
    • btrfs: add missing run of delayed items after unlink during log replay · 4751dc99
      Authored by Filipe Manana
      During log replay, whenever we need to check if a name (dentry) exists in
       a directory we do searches on the subvolume tree for inode references
       or directory entries (BTRFS_DIR_INDEX_KEY keys, and BTRFS_DIR_ITEM_KEY
      keys as well, before kernel 5.17). However when during log replay we
      unlink a name, through btrfs_unlink_inode(), we may not delete inode
      references and dir index keys from a subvolume tree and instead just add
      the deletions to the delayed inode's delayed items, which will only be
      run when we commit the transaction used for log replay. This means that
      after an unlink operation during log replay, if we attempt to search for
      the same name during log replay, we will not see that the name was already
      deleted, since the deletion is recorded only on the delayed items.
      
      We run delayed items after every unlink operation during log replay,
      except at unlink_old_inode_refs() and at add_inode_ref(). This was due
       to an oversight, as delayed items should be run after every unlink, for
      the reasons stated above.
      
      So fix those two cases.
      
      Fixes: 0d836392 ("Btrfs: fix mount failure after fsync due to hard link recreation")
      Fixes: 1f250e92 ("Btrfs: fix log replay failure after unlink and link combination")
      CC: stable@vger.kernel.org # 4.19+
       Signed-off-by: Filipe Manana <fdmanana@suse.com>
       Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: qgroup: fix deadlock between rescan worker and remove qgroup · d4aef1e1
      Authored by Sidong Yang
       Commit e804861b ("btrfs: fix deadlock between quota disable and
       qgroup rescan worker") by Kawasaki resolves the deadlock between quota
       disable and the qgroup rescan worker. But there is a similar deadlock
       case involving enabling or disabling quotas while creating or removing
       a qgroup. It can be reproduced with the simple script below.
      
      for i in {1..100}
      do
          btrfs quota enable /mnt &
          btrfs qgroup create 1/0 /mnt &
          btrfs qgroup destroy 1/0 /mnt &
          btrfs quota disable /mnt &
      done
      
      Here's why the deadlock happens:
      
      1) The quota rescan task is running.
      
      2) Task A calls btrfs_quota_disable(), locks the qgroup_ioctl_lock
         mutex, and then calls btrfs_qgroup_wait_for_completion(), to wait for
         the quota rescan task to complete.
      
      3) Task B calls btrfs_remove_qgroup() and it blocks when trying to lock
         the qgroup_ioctl_lock mutex, because it's being held by task A. At that
         point task B is holding a transaction handle for the current transaction.
      
      4) The quota rescan task calls btrfs_commit_transaction(). This results
         in it waiting for all other tasks to release their handles on the
         transaction, but task B is blocked on the qgroup_ioctl_lock mutex
         while holding a handle on the transaction, and that mutex is being held
         by task A, which is waiting for the quota rescan task to complete,
         resulting in a deadlock between these 3 tasks.
      
       To resolve this issue, the thread disabling quotas should unlock
       qgroup_ioctl_lock before waiting for rescan completion. Move the call to
       btrfs_qgroup_wait_for_completion() after the unlock of qgroup_ioctl_lock.
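
       The fixed ordering can be modeled in plain userspace C (the lock here is
       a stand-in flag, not a real mutex, and the names are illustrative):

```c
#include <string.h>
#include <stdbool.h>

/* Stand-in for the real qgroup_ioctl_lock mutex. */
static bool qgroup_ioctl_locked;

/* Records the order of the three steps so it can be inspected. */
static const char *order[3];
static int step;

/*
 * Model of the fixed quota-disable path: the wait for the rescan worker
 * happens only after qgroup_ioctl_lock is released, so tasks such as
 * btrfs_remove_qgroup() holding a transaction handle can take the mutex
 * and release their handle, letting the rescan worker's commit finish.
 */
static void quota_disable_fixed(void)
{
    qgroup_ioctl_locked = true;   /* mutex_lock(&qgroup_ioctl_lock) */
    order[step++] = "locked";     /* ... clear quota flags here ... */
    qgroup_ioctl_locked = false;  /* mutex_unlock(&qgroup_ioctl_lock) */
    order[step++] = "unlocked";
    order[step++] = "wait";       /* btrfs_qgroup_wait_for_completion() */
}
```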
      
      Fixes: e804861b ("btrfs: fix deadlock between quota disable and qgroup rescan worker")
      CC: stable@vger.kernel.org # 5.4+
       Reviewed-by: Filipe Manana <fdmanana@suse.com>
       Reviewed-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
       Signed-off-by: Sidong Yang <realwakka@gmail.com>
       Reviewed-by: David Sterba <dsterba@suse.com>
       Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix relocation crash due to premature return from btrfs_commit_transaction() · 5fd76bf3
      Authored by Omar Sandoval
      We are seeing crashes similar to the following trace:
      
      [38.969182] WARNING: CPU: 20 PID: 2105 at fs/btrfs/relocation.c:4070 btrfs_relocate_block_group+0x2dc/0x340 [btrfs]
      [38.973556] CPU: 20 PID: 2105 Comm: btrfs Not tainted 5.17.0-rc4 #54
      [38.974580] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
      [38.976539] RIP: 0010:btrfs_relocate_block_group+0x2dc/0x340 [btrfs]
      [38.980336] RSP: 0000:ffffb0dd42e03c20 EFLAGS: 00010206
      [38.981218] RAX: ffff96cfc4ede800 RBX: ffff96cfc3ce0000 RCX: 000000000002ca14
      [38.982560] RDX: 0000000000000000 RSI: 4cfd109a0bcb5d7f RDI: ffff96cfc3ce0360
      [38.983619] RBP: ffff96cfc309c000 R08: 0000000000000000 R09: 0000000000000000
      [38.984678] R10: ffff96cec0000001 R11: ffffe84c80000000 R12: ffff96cfc4ede800
      [38.985735] R13: 0000000000000000 R14: 0000000000000000 R15: ffff96cfc3ce0360
      [38.987146] FS:  00007f11c15218c0(0000) GS:ffff96d6dfb00000(0000) knlGS:0000000000000000
      [38.988662] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [38.989398] CR2: 00007ffc922c8e60 CR3: 00000001147a6001 CR4: 0000000000370ee0
      [38.990279] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [38.991219] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [38.992528] Call Trace:
      [38.992854]  <TASK>
      [38.993148]  btrfs_relocate_chunk+0x27/0xe0 [btrfs]
      [38.993941]  btrfs_balance+0x78e/0xea0 [btrfs]
      [38.994801]  ? vsnprintf+0x33c/0x520
      [38.995368]  ? __kmalloc_track_caller+0x351/0x440
      [38.996198]  btrfs_ioctl_balance+0x2b9/0x3a0 [btrfs]
      [38.997084]  btrfs_ioctl+0x11b0/0x2da0 [btrfs]
      [38.997867]  ? mod_objcg_state+0xee/0x340
      [38.998552]  ? seq_release+0x24/0x30
      [38.999184]  ? proc_nr_files+0x30/0x30
      [38.999654]  ? call_rcu+0xc8/0x2f0
      [39.000228]  ? __x64_sys_ioctl+0x84/0xc0
      [39.000872]  ? btrfs_ioctl_get_supported_features+0x30/0x30 [btrfs]
      [39.001973]  __x64_sys_ioctl+0x84/0xc0
      [39.002566]  do_syscall_64+0x3a/0x80
      [39.003011]  entry_SYSCALL_64_after_hwframe+0x44/0xae
      [39.003735] RIP: 0033:0x7f11c166959b
      [39.007324] RSP: 002b:00007fff2543e998 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
      [39.008521] RAX: ffffffffffffffda RBX: 00007f11c1521698 RCX: 00007f11c166959b
      [39.009833] RDX: 00007fff2543ea40 RSI: 00000000c4009420 RDI: 0000000000000003
      [39.011270] RBP: 0000000000000003 R08: 0000000000000013 R09: 00007f11c16f94e0
      [39.012581] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff25440df3
      [39.014046] R13: 0000000000000000 R14: 00007fff2543ea40 R15: 0000000000000001
      [39.015040]  </TASK>
      [39.015418] ---[ end trace 0000000000000000 ]---
      [43.131559] ------------[ cut here ]------------
      [43.132234] kernel BUG at fs/btrfs/extent-tree.c:2717!
      [43.133031] invalid opcode: 0000 [#1] PREEMPT SMP PTI
      [43.133702] CPU: 1 PID: 1839 Comm: btrfs Tainted: G        W         5.17.0-rc4 #54
      [43.134863] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
      [43.136426] RIP: 0010:unpin_extent_range+0x37a/0x4f0 [btrfs]
      [43.139913] RSP: 0000:ffffb0dd4216bc70 EFLAGS: 00010246
      [43.140629] RAX: 0000000000000000 RBX: ffff96cfc34490f8 RCX: 0000000000000001
      [43.141604] RDX: 0000000080000001 RSI: 0000000051d00000 RDI: 00000000ffffffff
      [43.142645] RBP: 0000000000000000 R08: 0000000000000000 R09: ffff96cfd07dca50
      [43.143669] R10: ffff96cfc46e8a00 R11: fffffffffffec000 R12: 0000000041d00000
      [43.144657] R13: ffff96cfc3ce0000 R14: ffffb0dd4216bd08 R15: 0000000000000000
      [43.145686] FS:  00007f7657dd68c0(0000) GS:ffff96d6df640000(0000) knlGS:0000000000000000
      [43.146808] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [43.147584] CR2: 00007f7fe81bf5b0 CR3: 00000001093ee004 CR4: 0000000000370ee0
      [43.148589] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [43.149581] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [43.150559] Call Trace:
      [43.150904]  <TASK>
      [43.151253]  btrfs_finish_extent_commit+0x88/0x290 [btrfs]
      [43.152127]  btrfs_commit_transaction+0x74f/0xaa0 [btrfs]
      [43.152932]  ? btrfs_attach_transaction_barrier+0x1e/0x50 [btrfs]
      [43.153786]  btrfs_ioctl+0x1edc/0x2da0 [btrfs]
      [43.154475]  ? __check_object_size+0x150/0x170
      [43.155170]  ? preempt_count_add+0x49/0xa0
      [43.155753]  ? __x64_sys_ioctl+0x84/0xc0
      [43.156437]  ? btrfs_ioctl_get_supported_features+0x30/0x30 [btrfs]
      [43.157456]  __x64_sys_ioctl+0x84/0xc0
      [43.157980]  do_syscall_64+0x3a/0x80
      [43.158543]  entry_SYSCALL_64_after_hwframe+0x44/0xae
      [43.159231] RIP: 0033:0x7f7657f1e59b
      [43.161819] RSP: 002b:00007ffda5cd1658 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
      [43.162702] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f7657f1e59b
      [43.163526] RDX: 0000000000000000 RSI: 0000000000009408 RDI: 0000000000000003
      [43.164358] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
      [43.165208] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
      [43.166029] R13: 00005621b91c3232 R14: 00005621b91ba580 R15: 00007ffda5cd1800
      [43.166863]  </TASK>
      [43.167125] Modules linked in: btrfs blake2b_generic xor pata_acpi ata_piix libata raid6_pq scsi_mod libcrc32c virtio_net virtio_rng net_failover rng_core failover scsi_common
      [43.169552] ---[ end trace 0000000000000000 ]---
      [43.171226] RIP: 0010:unpin_extent_range+0x37a/0x4f0 [btrfs]
      [43.174767] RSP: 0000:ffffb0dd4216bc70 EFLAGS: 00010246
      [43.175600] RAX: 0000000000000000 RBX: ffff96cfc34490f8 RCX: 0000000000000001
      [43.176468] RDX: 0000000080000001 RSI: 0000000051d00000 RDI: 00000000ffffffff
      [43.177357] RBP: 0000000000000000 R08: 0000000000000000 R09: ffff96cfd07dca50
      [43.178271] R10: ffff96cfc46e8a00 R11: fffffffffffec000 R12: 0000000041d00000
      [43.179178] R13: ffff96cfc3ce0000 R14: ffffb0dd4216bd08 R15: 0000000000000000
      [43.180071] FS:  00007f7657dd68c0(0000) GS:ffff96d6df800000(0000) knlGS:0000000000000000
      [43.181073] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [43.181808] CR2: 00007fe09905f010 CR3: 00000001093ee004 CR4: 0000000000370ee0
      [43.182706] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [43.183591] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      
      We first hit the WARN_ON(rc->block_group->pinned > 0) in
      btrfs_relocate_block_group() and then the BUG_ON(!cache) in
      unpin_extent_range(). This tells us that we are exiting relocation and
      removing the block group with bytes still pinned for that block group.
      This is supposed to be impossible: the last thing relocate_block_group()
      does is commit the transaction to get rid of pinned extents.
      
      Commit d0c2f4fa ("btrfs: make concurrent fsyncs wait less when
      waiting for a transaction commit") introduced an optimization so that
      commits from fsync don't have to wait for the previous commit to unpin
      extents. This was only intended to affect fsync, but it inadvertently
      made it possible for any commit to skip waiting for the previous commit
      to unpin. This is because if a call to btrfs_commit_transaction() finds
      that another thread is already committing the transaction, it waits for
      the other thread to complete the commit and then returns. If that other
      thread was in fsync, then it completes the commit without completing the
      previous commit. This makes the following sequence of events possible:
      
      Thread 1____________________|Thread 2 (fsync)_____________________|Thread 3 (balance)___________________
      btrfs_commit_transaction(N) |                                     |
        btrfs_run_delayed_refs    |                                     |
          pin extents             |                                     |
        ...                       |                                     |
        state = UNBLOCKED         |btrfs_sync_file                      |
                                  |  btrfs_start_transaction(N + 1)     |relocate_block_group
                                  |                                     |  btrfs_join_transaction(N + 1)
                                  |  btrfs_commit_transaction(N + 1)    |
        ...                       |  trans->state = COMMIT_START        |
                                  |                                     |  btrfs_commit_transaction(N + 1)
                                  |                                     |    wait_for_commit(N + 1, COMPLETED)
                                  |  wait_for_commit(N, SUPER_COMMITTED)|
        state = SUPER_COMMITTED   |  ...                                |
        btrfs_finish_extent_commit|                                     |
          unpin_extent_range()    |  trans->state = COMPLETED           |
                                  |                                     |    return
                                  |                                     |
          ...                     |                                     |Thread 1 isn't done, so pinned > 0
                                  |                                     |and we WARN
                                  |                                     |
                                  |                                     |btrfs_remove_block_group
          unpin_extent_range()    |                                     |
            Thread 3 removed the  |                                     |
            block group, so we BUG|                                     |
      
      There are other sequences involving SUPER_COMMITTED transactions that
      can cause a similar outcome.
      
      We could fix this by making relocation explicitly wait for unpinning,
      but there may be other cases that need it. Josef mentioned ENOSPC
      flushing and the free space cache inode as other potential victims.
      Rather than playing whack-a-mole, this fix is conservative and makes all
      commits not in fsync wait for all previous transactions, which is what
      the optimization intended.
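
       The restored rule can be expressed as a small decision function; this is
       a simplified userspace model, not the kernel implementation:

```c
#include <stdbool.h>

/* Transaction states involved, as named in btrfs (simplified subset). */
enum trans_state {
    TRANS_STATE_COMMIT_START,
    TRANS_STATE_SUPER_COMMITTED,
    TRANS_STATE_COMPLETED,
};

/*
 * Simplified model of the waiting rule the fix restores: only a commit
 * issued on behalf of fsync may return once the transaction reaches
 * SUPER_COMMITTED; every other committer waits for COMPLETED, which
 * implies previous transactions finished unpinning their extents.
 */
static enum trans_state commit_wait_target(bool for_fsync)
{
    return for_fsync ? TRANS_STATE_SUPER_COMMITTED : TRANS_STATE_COMPLETED;
}
```

       Under this rule, the balance thread in the diagram above would have
       waited for transaction N to finish unpinning before returning.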
      
      Fixes: d0c2f4fa ("btrfs: make concurrent fsyncs wait less when waiting for a transaction commit")
      CC: stable@vger.kernel.org # 5.15+
       Reviewed-by: Filipe Manana <fdmanana@suse.com>
       Signed-off-by: Omar Sandoval <osandov@fb.com>
       Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not start relocation until in progress drops are done · b4be6aef
      Authored by Josef Bacik
      We hit a bug with a recovering relocation on mount for one of our file
      systems in production.  I reproduced this locally by injecting errors
      into snapshot delete with balance running at the same time.  This
      presented as an error while looking up an extent item
      
        WARNING: CPU: 5 PID: 1501 at fs/btrfs/extent-tree.c:866 lookup_inline_extent_backref+0x647/0x680
        CPU: 5 PID: 1501 Comm: btrfs-balance Not tainted 5.16.0-rc8+ #8
        RIP: 0010:lookup_inline_extent_backref+0x647/0x680
        RSP: 0018:ffffae0a023ab960 EFLAGS: 00010202
        RAX: 0000000000000001 RBX: 0000000000000000 RCX: 0000000000000000
        RDX: 0000000000000000 RSI: 000000000000000c RDI: 0000000000000000
        RBP: ffff943fd2a39b60 R08: 0000000000000000 R09: 0000000000000001
        R10: 0001434088152de0 R11: 0000000000000000 R12: 0000000001d05000
        R13: ffff943fd2a39b60 R14: ffff943fdb96f2a0 R15: ffff9442fc923000
        FS:  0000000000000000(0000) GS:ffff944e9eb40000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007f1157b1fca8 CR3: 000000010f092000 CR4: 0000000000350ee0
        Call Trace:
         <TASK>
         insert_inline_extent_backref+0x46/0xd0
         __btrfs_inc_extent_ref.isra.0+0x5f/0x200
         ? btrfs_merge_delayed_refs+0x164/0x190
         __btrfs_run_delayed_refs+0x561/0xfa0
         ? btrfs_search_slot+0x7b4/0xb30
         ? btrfs_update_root+0x1a9/0x2c0
         btrfs_run_delayed_refs+0x73/0x1f0
         ? btrfs_update_root+0x1a9/0x2c0
         btrfs_commit_transaction+0x50/0xa50
         ? btrfs_update_reloc_root+0x122/0x220
         prepare_to_merge+0x29f/0x320
         relocate_block_group+0x2b8/0x550
         btrfs_relocate_block_group+0x1a6/0x350
         btrfs_relocate_chunk+0x27/0xe0
         btrfs_balance+0x777/0xe60
         balance_kthread+0x35/0x50
         ? btrfs_balance+0xe60/0xe60
         kthread+0x16b/0x190
         ? set_kthread_struct+0x40/0x40
         ret_from_fork+0x22/0x30
         </TASK>
      
      Normally snapshot deletion and relocation are excluded from running at
      the same time by the fs_info->cleaner_mutex.  However if we had a
      pending balance waiting to get the ->cleaner_mutex, and a snapshot
      deletion was running, and then the box crashed, we would come up in a
      state where we have a half deleted snapshot.
      
      Again, in the normal case the snapshot deletion needs to complete before
      relocation can start, but in this case relocation could very well start
      before the snapshot deletion completes, as we simply add the root to the
      dead roots list and wait for the next time the cleaner runs to clean up
      the snapshot.
      
       Fix this by checking, on mount, whether any DEAD_ROOT has a pending
       drop_progress key. If one does, we know we were in the middle of a drop
       operation, so we set a flag on the fs_info. Balance then waits until
       this flag is cleared before starting up again.
      
      If there are DEAD_ROOT's that don't have a drop_progress set then we're
      safe to start balance right away as we'll be properly protected by the
      cleaner_mutex.
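
       A minimal userspace model of the mount-time check (names and types are
       hypothetical):

```c
#include <stdbool.h>

/* Hypothetical per-root record; only the field of interest is modeled. */
struct dead_root {
    bool has_drop_progress; /* drop_progress key found for this root */
};

/*
 * Mount-time scan over the dead roots list: if any dead root recorded a
 * drop_progress key, a snapshot delete was interrupted mid-drop, and
 * balance must wait for the cleaner to finish those drops first.
 */
static bool unfinished_drops_pending(const struct dead_root *roots, int n)
{
    for (int i = 0; i < n; i++)
        if (roots[i].has_drop_progress)
            return true;
    return false; /* safe to balance; cleaner_mutex covers new deletes */
}
```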
      
      CC: stable@vger.kernel.org # 5.10+
       Reviewed-by: Filipe Manana <fdmanana@suse.com>
       Signed-off-by: Josef Bacik <josef@toxicpanda.com>
       Reviewed-by: David Sterba <dsterba@suse.com>
       Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: tree-checker: use u64 for item data end to avoid overflow · a6ab66eb
      Authored by Su Yue
       A user reported an array-index-out-of-bounds access while mounting a
       crafted image:
      
        [350.411942 ] loop0: detected capacity change from 0 to 262144
        [350.427058 ] BTRFS: device fsid a62e00e8-e94e-4200-8217-12444de93c2e devid 1 transid 8 /dev/loop0 scanned by systemd-udevd (1044)
        [350.428564 ] BTRFS info (device loop0): disk space caching is enabled
        [350.428568 ] BTRFS info (device loop0): has skinny extents
        [350.429589 ]
        [350.429619 ] UBSAN: array-index-out-of-bounds in fs/btrfs/struct-funcs.c:161:1
        [350.429636 ] index 1048096 is out of range for type 'page *[16]'
        [350.429650 ] CPU: 0 PID: 9 Comm: kworker/u8:1 Not tainted 5.16.0-rc4
        [350.429652 ] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-1ubuntu1.1 04/01/2014
        [350.429653 ] Workqueue: btrfs-endio-meta btrfs_work_helper [btrfs]
        [350.429772 ] Call Trace:
        [350.429774 ]  <TASK>
        [350.429776 ]  dump_stack_lvl+0x47/0x5c
        [350.429780 ]  ubsan_epilogue+0x5/0x50
        [350.429786 ]  __ubsan_handle_out_of_bounds+0x66/0x70
        [350.429791 ]  btrfs_get_16+0xfd/0x120 [btrfs]
        [350.429832 ]  check_leaf+0x754/0x1a40 [btrfs]
        [350.429874 ]  ? filemap_read+0x34a/0x390
        [350.429878 ]  ? load_balance+0x175/0xfc0
        [350.429881 ]  validate_extent_buffer+0x244/0x310 [btrfs]
        [350.429911 ]  btrfs_validate_metadata_buffer+0xf8/0x100 [btrfs]
        [350.429935 ]  end_bio_extent_readpage+0x3af/0x850 [btrfs]
        [350.429969 ]  ? newidle_balance+0x259/0x480
        [350.429972 ]  end_workqueue_fn+0x29/0x40 [btrfs]
        [350.429995 ]  btrfs_work_helper+0x71/0x330 [btrfs]
        [350.430030 ]  ? __schedule+0x2fb/0xa40
        [350.430033 ]  process_one_work+0x1f6/0x400
        [350.430035 ]  ? process_one_work+0x400/0x400
        [350.430036 ]  worker_thread+0x2d/0x3d0
        [350.430037 ]  ? process_one_work+0x400/0x400
        [350.430038 ]  kthread+0x165/0x190
        [350.430041 ]  ? set_kthread_struct+0x40/0x40
        [350.430043 ]  ret_from_fork+0x1f/0x30
        [350.430047 ]  </TASK>
        [350.430047 ]
        [350.430077 ] BTRFS warning (device loop0): bad eb member start: ptr 0xffe20f4e start 20975616 member offset 4293005178 size 2
      
      btrfs check reports:
        corrupt leaf: root=3 block=20975616 physical=20975616 slot=1, unexpected
        item end, have 4294971193 expect 3897
      
      The first slot item offset is 4293005033 and the size is 1966160.
      In check_leaf(), we use btrfs_item_end() to check the item boundary
      against the extent_buffer data size. However, the return type of
      btrfs_item_end() is u32, so (u32)(4293005033 + 1966160) == 3897: the
      addition overflows and the result 3897 looks like a reasonable leaf
      data size.
      
      Fix it by using a u64 variable to store the item data end in
      check_leaf() to avoid the u32 overflow.
      
      This commit does solve the invalid memory access shown in the stack
      trace.  However, the image's metadata profile is DUP and the other
      copy of the leaf is fine, so the image can be mounted successfully.
      But when umount is called, the ASSERT in btrfs_mark_buffer_dirty()
      is triggered because the only node in the extent tree has 0 items
      and an invalid owner.  That is solved by another commit,
      "btrfs: check extent buffer owner against the owner rootid".
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=215299
      Reported-by: Wenqing Liu <wenqingliu0120@gmail.com>
      CC: stable@vger.kernel.org # 4.19+
      Signed-off-by: Su Yue <l@damenly.su>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not WARN_ON() if we have PageError set · a50e1fcb
      Committed by Josef Bacik
      Whenever we do any extent buffer operation we call
      assert_eb_page_uptodate() to complain loudly if we're operating on a
      non-uptodate page.  Our overnight tests caught this warning earlier
      this week
      
        WARNING: CPU: 1 PID: 553508 at fs/btrfs/extent_io.c:6849 assert_eb_page_uptodate+0x3f/0x50
        CPU: 1 PID: 553508 Comm: kworker/u4:13 Tainted: G        W         5.17.0-rc3+ #564
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
        Workqueue: btrfs-cache btrfs_work_helper
        RIP: 0010:assert_eb_page_uptodate+0x3f/0x50
        RSP: 0018:ffffa961440a7c68 EFLAGS: 00010246
        RAX: 0017ffffc0002112 RBX: ffffe6e74453f9c0 RCX: 0000000000001000
        RDX: ffffe6e74467c887 RSI: ffffe6e74453f9c0 RDI: ffff8d4c5efc2fc0
        RBP: 0000000000000d56 R08: ffff8d4d4a224000 R09: 0000000000000000
        R10: 00015817fa9d1ef0 R11: 000000000000000c R12: 00000000000007b1
        R13: ffff8d4c5efc2fc0 R14: 0000000001500000 R15: 0000000001cb1000
        FS:  0000000000000000(0000) GS:ffff8d4dbbd00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007ff31d3448d8 CR3: 0000000118be8004 CR4: 0000000000370ee0
        Call Trace:
      
         extent_buffer_test_bit+0x3f/0x70
         free_space_test_bit+0xa6/0xc0
         load_free_space_tree+0x1f6/0x470
         caching_thread+0x454/0x630
         ? rcu_read_lock_sched_held+0x12/0x60
         ? rcu_read_lock_sched_held+0x12/0x60
         ? rcu_read_lock_sched_held+0x12/0x60
         ? lock_release+0x1f0/0x2d0
         btrfs_work_helper+0xf2/0x3e0
         ? lock_release+0x1f0/0x2d0
         ? finish_task_switch.isra.0+0xf9/0x3a0
         process_one_work+0x26d/0x580
         ? process_one_work+0x580/0x580
         worker_thread+0x55/0x3b0
         ? process_one_work+0x580/0x580
         kthread+0xf0/0x120
         ? kthread_complete_and_exit+0x20/0x20
         ret_from_fork+0x1f/0x30
      
      This was partially fixed by c2e39305 ("btrfs: clear extent buffer
      uptodate when we fail to write it"), however all that fix did was keep
      us from finding extent buffers after a failed writeout.  It didn't keep
      us from continuing to use a buffer that we already had found.
      
      In this case we're searching the commit root to cache the block group,
      so we can start committing the transaction and switch the commit root
      and then start writing.  After the switch we can look up an extent
      buffer that hasn't been written yet and start processing that block
      group.  Then we fail to write that block out and clear Uptodate on the
      page, and then we start spewing these errors.
      
      Normally we're protected by the tree lock to a certain degree here.
      If we read a block we have that block read locked, and we block the
      writer from locking the block before we submit it for the write.
      However this isn't necessarily foolproof, because the read could
      happen before we do the submit_bio and after we locked and unlocked
      the extent buffer.
      
      Also in this particular case we have path->skip_locking set, so that
      won't save us here.  We'll simply get a block that was valid when we
      read it, but became invalid while we were using it.
      
      What we really want is to catch the case where we've "read" a block but
      it's not marked Uptodate.  On read we ClearPageError(), so if we're
      !Uptodate and !Error we know we didn't do the right thing for reading
      the page.
      
      Fix this by checking !Uptodate && !Error, this way we will not complain
      if our buffer gets invalidated while we're using it, and we'll maintain
      the spirit of the check which is to make sure we have a fully in-cache
      block while we're messing with it.
      
      CC: stable@vger.kernel.org # 5.4+
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix lost prealloc extents beyond eof after full fsync · d9947887
      Committed by Filipe Manana
      When doing a full fsync, if we have prealloc extents beyond (or at) eof,
      and the leaves that contain them were not modified in the current
      transaction, we end up not logging them. This results in losing those
      extents when we replay the log after a power failure, since the inode is
      truncated to the current value of the logged i_size.
      
      Just like for the fast fsync path, we need to always log all prealloc
      extents starting at or beyond i_size. The fast fsync case was fixed in
      commit 471d557a ("Btrfs: fix loss of prealloc extents past i_size
      after fsync log replay") but it missed the full fsync path. The problem
      exists since the very early days, when the log tree was added by
      commit e02119d5 ("Btrfs: Add a write ahead tree log to optimize
      synchronous operations").
      
      Example reproducer:
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt
      
        # Create our test file with many file extent items, so that they span
        # several leaves of metadata, even if the node/page size is 64K. Use
        # direct IO and not fsync/O_SYNC because it's both faster and it avoids
        # clearing the full sync flag from the inode - we want the fsync below
        # to trigger the slow full sync code path.
        $ xfs_io -f -d -c "pwrite -b 4K 0 16M" /mnt/foo
      
        # Now add two preallocated extents to our file without extending the
        # file's size. One right at i_size, and another further beyond, leaving
        # a gap between the two prealloc extents.
        $ xfs_io -c "falloc -k 16M 1M" /mnt/foo
        $ xfs_io -c "falloc -k 20M 1M" /mnt/foo
      
        # Make sure everything is durably persisted and the transaction is
        # committed. This makes all created extents to have a generation lower
        # than the generation of the transaction used by the next write and
        # fsync.
        sync
      
        # Now overwrite only the first extent, which will result in modifying
        # only the first leaf of metadata for our inode. Then fsync it. This
        # fsync will use the slow code path (inode full sync bit is set) because
        # it's the first fsync since the inode was created/loaded.
        $ xfs_io -c "pwrite 0 4K" -c "fsync" /mnt/foo
      
        # Extent list before power failure.
        $ xfs_io -c "fiemap -v" /mnt/foo
        /mnt/foo:
         EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
           0: [0..7]:          2178048..2178055     8   0x0
           1: [8..16383]:      26632..43007     16376   0x0
           2: [16384..32767]:  2156544..2172927 16384   0x0
           3: [32768..34815]:  2172928..2174975  2048 0x800
           4: [34816..40959]:  hole              6144
           5: [40960..43007]:  2174976..2177023  2048 0x801
      
        <power fail>
      
        # Mount fs again, trigger log replay.
        $ mount /dev/sdc /mnt
      
        # Extent list after power failure and log replay.
        $ xfs_io -c "fiemap -v" /mnt/foo
        /mnt/foo:
         EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
           0: [0..7]:          2178048..2178055     8   0x0
           1: [8..16383]:      26632..43007     16376   0x0
           2: [16384..32767]:  2156544..2172927 16384   0x1
      
        # The prealloc extents at file offsets 16M and 20M are missing.
      
      So fix this by calling btrfs_log_prealloc_extents() when we are doing a
      full fsync, so that we always log all prealloc extents beyond eof.
      
      A test case for fstests will follow soon.
      
      CC: stable@vger.kernel.org # 4.19+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: fix a wrong check on subpage->writers · c992fa1f
      Committed by Qu Wenruo
      [BUG]
      When looping btrfs/074 with 64K page size and 4K sectorsize, there is a
      low chance (1/50~1/100) to crash with the following ASSERT() triggered
      in btrfs_subpage_start_writer():
      
      	ret = atomic_add_return(nbits, &subpage->writers);
      	ASSERT(ret == nbits); <<< This one <<<
      
      [CAUSE]
      With more debugging output on the parameters of
      btrfs_subpage_start_writer(), it shows a very concerning error:
      
        ret=29 nbits=13 start=393216 len=53248
      
      For @nbits it's correct, but @ret which is the returned value from
      atomic_add_return(), it's not only larger than nbits, but also larger
      than max sectors per page value (for 64K page size and 4K sector size,
      it's 16).
      
      This indicates that some call sites are not properly decreasing the value.
      
      And that's exactly the case.  In btrfs_page_unlock_writer(), since a
      page can be locked either by lock_page() or by process_one_page(),
      we have to check whether the subpage has any writers.
      
      If there are no writers, it's locked by lock_page() and we only need
      to unlock it.
      
      But unfortunately the check for writers is the complete opposite:
      
      	if (atomic_read(&subpage->writers))
      		/* No writers, locked by plain lock_page() */
      		return unlock_page(page);
      
      We directly unlock the page if it has writers, which is the complete
      opposite of what we want.
      
      Thankfully the affected call site is only limited to
      extent_write_locked_range(), so it's mostly affecting compressed write.
      
      [FIX]
      Just fix the wrong check condition to fix the bug.
      
      Fixes: e55a0de1 ("btrfs: rework page locking in __extent_writepage()")
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • erofs: fix ztailpacking on > 4GiB filesystems · 22ba5e99
      Committed by Gao Xiang
      z_idataoff here is an absolute physical offset, so it should use
      erofs_off_t (64 bits at least).  Otherwise, it gets truncated and
      causes decompression failure.
      
      Link: https://lore.kernel.org/r/20220222033118.20540-1-hsiangkao@linux.alibaba.com
      Fixes: ab92184f ("erofs: add on-disk compressed tail-packing inline support")
      Reviewed-by: Yue Hu <huyue2@yulong.com>
      Reviewed-by: Chao Yu <chao@kernel.org>
      Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
    • binfmt_elf: Avoid total_mapping_size for ET_EXEC · 439a8468
      Committed by Kees Cook
      Partially revert commit 5f501d55 ("binfmt_elf: reintroduce using
      MAP_FIXED_NOREPLACE"), which applied the ET_DYN "total_mapping_size"
      logic also to ET_EXEC.
      
      At least ia64 has ET_EXEC PT_LOAD segments that are not virtual-address
      contiguous (but _are_ file-offset contiguous). This would result in a
      giant mapping attempting to cover the entire span, including the virtual
      address range hole, and well beyond the size of the ELF file itself,
      causing the kernel to refuse to load it. For example:
      
      $ readelf -lW /usr/bin/gcc
      ...
      Program Headers:
        Type Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   ...
      ...
        LOAD 0x000000 0x4000000000000000 0x4000000000000000 0x00b5a0 0x00b5a0 ...
        LOAD 0x00b5a0 0x600000000000b5a0 0x600000000000b5a0 0x0005ac 0x000710 ...
      ...
             ^^^^^^^^ ^^^^^^^^^^^^^^^^^^                    ^^^^^^^^ ^^^^^^^^
      
      File offset range     : 0x000000-0x00bb4c
      			0x00bb4c bytes
      
      Virtual address range : 0x4000000000000000-0x600000000000bcb0
      			0x200000000000bcb0 bytes
      
      Remove the total_mapping_size logic for ET_EXEC, which reduces the
      ET_EXEC MAP_FIXED_NOREPLACE coverage to only the first PT_LOAD (better
      than nothing), and retains it for ET_DYN.
      
      Ironically, this is the reverse of the problem that originally caused
      problems with MAP_FIXED_NOREPLACE: overlapping PT_LOAD segments. Future
      work could restore full coverage if load_elf_binary() were to perform
      mappings in a separate phase from the loading (where it could resolve
      both overlaps and holes).
      
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-mm@kvack.org
      Reported-by: matoro <matoro_bugzilla_kernel@matoro.tk>
      Fixes: 5f501d55 ("binfmt_elf: reintroduce using MAP_FIXED_NOREPLACE")
      Link: https://lore.kernel.org/r/a3edd529-c42d-3b09-135c-7e98a15b150f@leemhuis.info
      Tested-by: matoro <matoro_mailinglist_kernel@matoro.tk>
      Link: https://lore.kernel.org/lkml/ce8af9c13bcea9230c7689f3c1e0e2cd@matoro.tk
      Tested-by: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Link: https://lore.kernel.org/lkml/49182d0d-708b-4029-da5f-bc18603440a6@physik.fu-berlin.de
      Cc: stable@vger.kernel.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
  5. 26 Feb 2022, 1 commit
  6. 24 Feb 2022, 7 commits
    • btrfs: reduce extent threshold for autodefrag · 558732df
      Committed by Qu Wenruo
      There is a big gap between inode_should_defrag() and the autodefrag
      extent size threshold.  inode_should_defrag() has a flexible
      @small_write value: for compressed extents it's 16K, and for
      non-compressed extents it's 64K.
      
      However for autodefrag extent size threshold, it's always fixed to the
      default value (256K).
      
      This means the following write sequence will cause autodefrag to
      defrag ranges which never triggered autodefrag themselves:
      
        pwrite 0 8k
        sync
        pwrite 8k 128K
        sync
      
      The latter 128K write will also be considered a defrag target (if
      other conditions are met), while only the 8K write really triggered
      autodefrag.
      
      Such behavior can cause extra IO for autodefrag.
      
      Close the gap, by copying the @small_write value into inode_defrag, so
      that later autodefrag can use the same @small_write value which
      triggered autodefrag.
      
      With the existing transid value, this really allows autodefrag to
      scan only the ranges which triggered it.
      
      Although this change mostly reduces the extent_thresh value for
      autodefrag, I believe in the future we should allow users to specify
      the autodefrag extent threshold through mount options, but that's
      another problem to consider in the future.
      
      CC: stable@vger.kernel.org # 5.16+
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: autodefrag: only scan one inode once · 26fbac25
      Committed by Qu Wenruo
      Although we have btrfs_requeue_inode_defrag(), for autodefrag we are
      still just exhausting all inode_defrag items in the tree.
      
      This means requeueing an inode_defrag makes little difference; we
      end up scanning the inode from the beginning to the end anyway.
      
      Change the behaviour to always scan from offset 0 of an inode, and till
      the end.
      
      By this we get the following benefit:
      
      - Straight-forward code
      
      - No more re-queue related check
      
      - Fewer members in inode_defrag
      
      We still keep the same btrfs_get_fs_root() and btrfs_iget() check for
      each loop, and added extra should_auto_defrag() check per-loop.
      
      Note: the patch needs to be backported and is intentionally written
      to minimize the diff size, code will be cleaned up later.
      
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: don't use merged extent map for their generation check · 199257a7
      Committed by Qu Wenruo
      For extent maps, if they are not compressed extents and are adjacent by
      logical addresses and file offsets, they can be merged into one larger
      extent map.
      
      Such merged extent map will have the higher generation of all the
      original ones.
      
      But this brings a problem for autodefrag, as it relies on accurate
      extent_map::generation to determine if one extent should be defragged.
      
      For merged extent maps, their higher generation can mark some older
      extents to be defragged while the original extent map doesn't meet the
      minimal generation threshold.
      
      Thus this will cause extra IO.
      
      To solve the problem, here we introduce a new flag, EXTENT_FLAG_MERGED,
      to indicate that the extent map was merged from one or more ems.
      
      And for autodefrag, if we find a merged extent map, and its generation
      meets the generation requirement, we just don't use this one, and go
      back to defrag_get_extent() to read extent maps from subvolume trees.
      
      This could cause more read IO, but should result in less data being
      rewritten by defrag, so in the long run it should be a win for
      autodefrag.
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: bring back the old file extent search behavior · d5633b0d
      Committed by Qu Wenruo
      For defrag, we don't really want to use btrfs_get_extent() to iterate
      all extent maps of an inode.
      
      The reasons are:
      
      - btrfs_get_extent() can merge extent maps
        The resulting em has the higher generation of the two, causing
        defrag to mark unnecessary parts of such a merged large extent map.
      
        This can in fact result in extra IO for autodefrag in v5.16+
        kernels.
      
        However this patch is not going to completely solve the problem, as
        one can still use read() to trigger extent map reading and get them
        merged.
      
        The complete solution for the extent map merging generation problem
        will come as a standalone fix.
      
      - btrfs_get_extent() caches the extent map result
        Normally it's fine, but for defrag the target range may not get
        another read/write for a long long time.
        Such cache would only increase the memory usage.
      
      - btrfs_get_extent() doesn't skip older extent map
        Unlike the old find_new_extent() which uses btrfs_search_forward() to
        skip the older subtree, thus it will pick up unnecessary extent maps.
      
      This patch will fix the regression by introducing defrag_get_extent() to
      replace the btrfs_get_extent() call.
      
      This helper will:
      
      - Not cache the file extent we found
        It will search the file extent and manually convert it to em.
      
      - Use btrfs_search_forward() to skip entire ranges which is modified in
        the past
      
      This should reduce the IO for autodefrag.
      Reported-by: Filipe Manana <fdmanana@suse.com>
      Fixes: 7b508037 ("btrfs: defrag: use defrag_one_cluster() to implement btrfs_defrag_file()")
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: remove an ambiguous condition for rejection · 550f133f
      Committed by Qu Wenruo
      From the very beginning of btrfs defrag, there is a check to reject
      extents which meet both conditions:
      
      - Physically adjacent
      
        We may want to defrag physically adjacent extents to reduce the number
        of extents or the size of subvolume tree.
      
      - Larger than 128K
      
        This may be there for compressed extents, but unfortunately 128K is
        exactly the max capacity for compressed extents, and since the
        check is > 128K, it never rejects compressed extents.
      
        Furthermore, the compressed extent capacity bug is fixed by the
        previous patch, so there is no reason for that check anymore.
      
      The original check has a very small range to reject (the target
      extent size is > 128K, and the default extent threshold is 256K),
      and for compressed extents it doesn't work at all.
      
      So it's better just to remove the rejection, and allow us to defrag
      physically adjacent extents.
      
      CC: stable@vger.kernel.org # 5.16
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: don't defrag extents which are already at max capacity · 979b25c3
      Committed by Qu Wenruo
      [BUG]
      For compressed extents, defrag ioctl will always try to defrag any
      compressed extents, wasting not only IO but also CPU time to
      compress/decompress:
      
         mkfs.btrfs -f $DEV
         mount -o compress $DEV $MNT
         xfs_io -f -c "pwrite -S 0xab 0 128K" $MNT/foobar
         sync
         xfs_io -f -c "pwrite -S 0xcd 128K 128K" $MNT/foobar
         sync
         echo "=== before ==="
         xfs_io -c "fiemap -v" $MNT/foobar
         btrfs filesystem defrag $MNT/foobar
         sync
         echo "=== after ==="
         xfs_io -c "fiemap -v" $MNT/foobar
      
      Then it shows the 2 128K extents just get COW for no extra benefit, with
      extra IO/CPU spent:
      
          === before ===
          /mnt/btrfs/file1:
           EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
             0: [0..255]:        26624..26879       256   0x8
             1: [256..511]:      26632..26887       256   0x9
          === after ===
          /mnt/btrfs/file1:
           EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
             0: [0..255]:        26640..26895       256   0x8
             1: [256..511]:      26648..26903       256   0x9
      
      This affects not only v5.16 (after the defrag rework), but also v5.15
      (before the defrag rework).
      
      [CAUSE]
      From the very beginning, btrfs defrag never checks if one extent is
      already at its max capacity (128K for compressed extents, 128M
      otherwise).
      
      And the default extent size threshold is 256K, which is already beyond
      the compressed extent max size.
      
      This means, by default btrfs defrag ioctl will mark all compressed
      extent which is not adjacent to a hole/preallocated range for defrag.
      
      [FIX]
      Introduce a helper to grab the maximum extent size, and then in
      defrag_collect_targets() and defrag_check_next_extent(), reject extents
      which are already at their max capacity.
      Reported-by: Filipe Manana <fdmanana@suse.com>
      CC: stable@vger.kernel.org # 5.16
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: defrag: don't try to merge regular extents with preallocated extents · 7093f152
      Committed by Qu Wenruo
      [BUG]
      With older kernels (before v5.16), btrfs will defrag preallocated
      extents.  With newer kernels (v5.16 and newer) btrfs will not defrag
      preallocated extents, but it will defrag the extent just before the
      preallocated extent, even if it's just a single sector.
      
      This can be exposed by the following small script:
      
      	mkfs.btrfs -f $dev > /dev/null
      
      	mount $dev $mnt
      	xfs_io -f -c "pwrite 0 4k" -c sync -c "falloc 4k 16K" $mnt/file
      	xfs_io -c "fiemap -v" $mnt/file
      	btrfs fi defrag $mnt/file
      	sync
      	xfs_io -c "fiemap -v" $mnt/file
      
      The output looks like this on older kernels:
      
      /mnt/btrfs/file:
       EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
         0: [0..7]:          26624..26631         8   0x0
         1: [8..39]:         26632..26663        32 0x801
      /mnt/btrfs/file:
       EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
         0: [0..39]:         26664..26703        40   0x1
      
      This defrags the single sector along with the preallocated extent,
      and replaces them with a regular extent at a new location (caused by
      data COW).
      This wastes most of the data IO just for the preallocated range.
      
      On the other hand, v5.16 is slightly better:
      
      /mnt/btrfs/file:
       EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
         0: [0..7]:          26624..26631         8   0x0
         1: [8..39]:         26632..26663        32 0x801
      /mnt/btrfs/file:
       EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
         0: [0..7]:          26664..26671         8   0x0
         1: [8..39]:         26632..26663        32 0x801
      
      The preallocated range is not defragged, but the sector before it still
      gets defragged, which has no need for it.
      
      [CAUSE]
      One of the functions reused by the old and new behavior is
      defrag_check_next_extent(), which determines whether we should
      defrag the current extent by checking the next one.
      
      It only checks if the next extent is a hole or inlined, but it doesn't
      check if it's preallocated.
      
      On the other hand, outside of that function, both old and new
      kernels reject preallocated extents themselves.
      
      This inconsistency causes the behavior above.
      
      [FIX]
      - Also check if next extent is preallocated
        If so, don't defrag current extent.
      
      - Add comments for each branch why we reject the extent
      
      This will reduce the IO caused by defrag ioctl and autodefrag.
      
      CC: stable@vger.kernel.org # 5.16
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  7. 23 Feb 2022, 2 commits
    • configfs: fix a race in configfs_{,un}register_subsystem() · 84ec758f
      Committed by ChenXiaoSong
      When configfs_register_subsystem() or configfs_unregister_subsystem()
      is executing link_group() or unlink_group(),
      it is possible that two processes add or delete list concurrently.
      Some unfortunate interleavings of them can cause kernel panic.
      
      One of cases is:
      A --> B --> C --> D
      A <-- B <-- C <-- D
      
           delete list_head *B        |      delete list_head *C
      --------------------------------|-----------------------------------
      configfs_unregister_subsystem   |   configfs_unregister_subsystem
        unlink_group                  |     unlink_group
          unlink_obj                  |       unlink_obj
            list_del_init             |         list_del_init
              __list_del_entry        |           __list_del_entry
                __list_del            |             __list_del
                  // next == C        |
                  next->prev = prev   |
                                      |               next->prev = prev
                  prev->next = next   |
                                      |                 // prev == B
                                      |                 prev->next = next
      
      Fix this by taking a mutex when calling link_group() or
      unlink_group().  The parent configfs_subsystem is NULL when the
      config_item is the root, so a new global mutex,
      configfs_subsystem_mutex, is introduced.
      
      Fixes: 7063fbf2 ("[PATCH] configfs: User-driven configuration filesystem")
      Signed-off-by: ChenXiaoSong <chenxiaosong2@huawei.com>
      Signed-off-by: Laibin Qiu <qiulaibin@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • io_uring: disallow modification of rsrc_data during quiesce · 80912cef
      Committed by Dylan Yudaken
      io_rsrc_ref_quiesce will unlock the uring while it waits for
      references to the io_rsrc_data to be killed.
      While the ring is unlocked, other code paths can add references to
      the data via calls to io_rsrc_node_switch.
      There is a race condition where such a reference can be added after
      the completion has been signalled. At this point the
      io_rsrc_ref_quiesce call will wake up and relock the uring, assuming
      the data is unused and can be freed - although it is actually being
      used.
      
      To fix this, make io_rsrc_ref_quiesce() check whether the resource
      has been revived before freeing it.
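      The shape of the race and the revive check can be sketched with a toy
      model (illustrative names and logic, not the real io_uring API): the
      quiesce loop replays scripted "concurrent" ref operations between
      wakeups and only frees the data once a wakeup finds the refcount
      still at zero.

```python
class RsrcData:
    """Toy stand-in for io_rsrc_data: just a reference count."""
    def __init__(self):
        self.refs = 0

def quiesce_and_free(data, wakeups):
    """wakeups: for each wakeup of the quiescing task, the list of ref ops
    ("get"/"put") that other tasks performed while it slept. Return True
    once it is safe to free the data, False if every wakeup found it
    revived."""
    for ops in wakeups:
        for op in ops:  # e.g. io_rsrc_node_switch() taking a reference
            data.refs += 1 if op == "get" else -1
        # The fix: re-check for revival instead of trusting the
        # completion that triggered the wakeup.
        if data.refs == 0:
            return True   # genuinely unused, safe to free
    return False          # still referenced after the last wakeup
```

      Without the re-check, the first wakeup would free the data even when
      a "get" had slipped in after the completion was signalled.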
      
      Reported-by: syzbot+ca8bf833622a1662745b@syzkaller.appspotmail.com
      Cc: stable@vger.kernel.org
      Signed-off-by: Dylan Yudaken <dylany@fb.com>
      Link: https://lore.kernel.org/r/20220222161751.995746-1-dylany@fb.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  8. 21 Feb 2022, 1 commit
  9. 18 Feb 2022, 1 commit
  10. 17 Feb 2022, 2 commits
  11. 16 Feb 2022, 2 commits
    • Q
      btrfs: defrag: allow defrag_one_cluster() to skip large extent which is not a target · 966d879b
      Authored by Qu Wenruo
      In the rework of btrfs_defrag_file(), we always call
      defrag_one_cluster() and increase the offset by the cluster size,
      which is only 256K.
      
      But there are cases where we have a large extent (e.g. 128M) which
      doesn't need to be defragged at all.
      
      Before the refactor, we could directly skip the range, but now we
      have to scan that extent map again and again until the cluster moves
      past the non-target extent.
      
      Fix the problem by allowing defrag_one_cluster() to increase
      btrfs_defrag_ctrl::last_scanned to the end of an extent, if and only
      if the last extent of the cluster is not a target.
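      A toy model (simplified, not the btrfs code) makes the effect of the
      skip concrete: scanning a single large non-target extent with a 256K
      cluster step versus jumping last_scanned straight to the extent's
      end.

```python
CLUSTER = 256 * 1024  # 256K cluster size used by btrfs_defrag_file()

def scan_passes(extent_len, skip_large_extent):
    """Count defrag_one_cluster()-style passes over one non-target extent
    of extent_len bytes."""
    pos = 0
    passes = 0
    while pos < extent_len:
        passes += 1
        if skip_large_extent:
            # the fix: last_scanned jumps to the end of the non-target extent
            pos = extent_len
        else:
            pos += CLUSTER  # old behaviour: re-scan the same extent map
    return passes
```

      For a 128M non-target extent like the one mentioned above, the old
      policy re-enters the loop 512 times, while the fixed policy finishes
      in a single pass.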
      
      The test script looks like this:
      
      	mkfs.btrfs -f $dev > /dev/null
      
      	mount $dev $mnt
      
      	# As btrfs ioctl uses 32M as extent_threshold
      	xfs_io -f -c "pwrite 0 64M" $mnt/file1
      	sync
	# Some fragmented ranges to defrag
      	xfs_io -s -c "pwrite 65548k 4k" \
      		  -c "pwrite 65544k 4k" \
      		  -c "pwrite 65540k 4k" \
      		  -c "pwrite 65536k 4k" \
      		  $mnt/file1
      	sync
      
      	echo "=== before ==="
      	xfs_io -c "fiemap -v" $mnt/file1
      	echo "=== after ==="
      	btrfs fi defrag $mnt/file1
      	sync
      	xfs_io -c "fiemap -v" $mnt/file1
      	umount $mnt
      
      With extra ftrace output added in defrag_one_cluster(), before the
      patch it would result in tons of loops:
      
      (As defrag_one_cluster() is inlined, the function name is its caller)
      
        btrfs-126062  [005] .....  4682.816026: btrfs_defrag_file: r/i=5/257 start=0 len=262144
        btrfs-126062  [005] .....  4682.816027: btrfs_defrag_file: r/i=5/257 start=262144 len=262144
        btrfs-126062  [005] .....  4682.816028: btrfs_defrag_file: r/i=5/257 start=524288 len=262144
        btrfs-126062  [005] .....  4682.816028: btrfs_defrag_file: r/i=5/257 start=786432 len=262144
        btrfs-126062  [005] .....  4682.816028: btrfs_defrag_file: r/i=5/257 start=1048576 len=262144
        ...
        btrfs-126062  [005] .....  4682.816043: btrfs_defrag_file: r/i=5/257 start=67108864 len=262144
      
      But with this patch there will be just one loop, then directly to the
      end of the extent:
      
        btrfs-130471  [014] .....  5434.029558: defrag_one_cluster: r/i=5/257 start=0 len=262144
        btrfs-130471  [014] .....  5434.029559: defrag_one_cluster: r/i=5/257 start=67108864 len=16384
      
      CC: stable@vger.kernel.org # 5.16
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • D
      btrfs: prevent copying too big compressed lzo segment · 741b23a9
      Authored by Dāvis Mosāns
      The compressed length can be corrupted to be a lot larger than the
      memory we have allocated for the buffer. This causes the memcpy in
      copy_compressed_segment() to write outside the allocated memory.

      This mostly results in a stuck read syscall, but sometimes, when
      using btrfs send, it can trigger a #GP fault:
      
        kernel: general protection fault, probably for non-canonical address 0x841551d5c1000: 0000 [#1] PREEMPT SMP NOPTI
        kernel: CPU: 17 PID: 264 Comm: kworker/u256:7 Tainted: P           OE     5.17.0-rc2-1 #12
        kernel: Workqueue: btrfs-endio btrfs_work_helper [btrfs]
        kernel: RIP: 0010:lzo_decompress_bio (./include/linux/fortify-string.h:225 fs/btrfs/lzo.c:322 fs/btrfs/lzo.c:394) btrfs
        Code starting with the faulting instruction
        ===========================================
           0:*  48 8b 06                mov    (%rsi),%rax              <-- trapping instruction
           3:   48 8d 79 08             lea    0x8(%rcx),%rdi
           7:   48 83 e7 f8             and    $0xfffffffffffffff8,%rdi
           b:   48 89 01                mov    %rax,(%rcx)
           e:   44 89 f0                mov    %r14d,%eax
          11:   48 8b 54 06 f8          mov    -0x8(%rsi,%rax,1),%rdx
        kernel: RSP: 0018:ffffb110812efd50 EFLAGS: 00010212
        kernel: RAX: 0000000000001000 RBX: 000000009ca264c8 RCX: ffff98996e6d8ff8
        kernel: RDX: 0000000000000064 RSI: 000841551d5c1000 RDI: ffffffff9500435d
        kernel: RBP: ffff989a3be856c0 R08: 0000000000000000 R09: 0000000000000000
        kernel: R10: 0000000000000000 R11: 0000000000001000 R12: ffff98996e6d8000
        kernel: R13: 0000000000000008 R14: 0000000000001000 R15: 000841551d5c1000
        kernel: FS:  0000000000000000(0000) GS:ffff98a09d640000(0000) knlGS:0000000000000000
        kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        kernel: CR2: 00001e9f984d9ea8 CR3: 000000014971a000 CR4: 00000000003506e0
        kernel: Call Trace:
        kernel:  <TASK>
        kernel: end_compressed_bio_read (fs/btrfs/compression.c:104 fs/btrfs/compression.c:1363 fs/btrfs/compression.c:323) btrfs
        kernel: end_workqueue_fn (fs/btrfs/disk-io.c:1923) btrfs
        kernel: btrfs_work_helper (fs/btrfs/async-thread.c:326) btrfs
        kernel: process_one_work (./arch/x86/include/asm/jump_label.h:27 ./include/linux/jump_label.h:212 ./include/trace/events/workqueue.h:108 kernel/workqueue.c:2312)
        kernel: worker_thread (./include/linux/list.h:292 kernel/workqueue.c:2455)
        kernel: ? process_one_work (kernel/workqueue.c:2397)
        kernel: kthread (kernel/kthread.c:377)
        kernel: ? kthread_complete_and_exit (kernel/kthread.c:332)
        kernel: ret_from_fork (arch/x86/entry/entry_64.S:301)
        kernel:  </TASK>
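
      The defensive check implied by the fix can be sketched as follows
      (illustrative Python with a hypothetical signature, not the actual
      fs/btrfs/lzo.c code): validate the on-disk segment length against the
      destination buffer's capacity before copying, so a corrupted length
      is rejected instead of overrunning the allocation.

```python
def copy_compressed_segment(dest, dest_cap, segment):
    """Copy one compressed segment into dest (capacity dest_cap bytes).
    Return the number of bytes copied, or raise if the recorded length
    exceeds the buffer -- the on-disk value is untrusted input."""
    seg_len = len(segment)
    if seg_len > dest_cap:
        # corrupted/oversized length: refuse instead of writing past dest
        raise ValueError("corrupted compressed length %d > buffer %d"
                         % (seg_len, dest_cap))
    dest[:seg_len] = segment  # the memcpy-equivalent, now bounded
    return seg_len
```

      The key point is simply that the length check happens before the
      copy, because the length comes from (possibly corrupted) disk data.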
      
      CC: stable@vger.kernel.org # 4.9+
      Signed-off-by: Dāvis Mosāns <davispuh@gmail.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  12. 15 Feb 2022, 3 commits
    • E
      io_uring: add a schedule point in io_add_buffers() · f240762f
      Authored by Eric Dumazet
      Looping ~65535 times doing kmalloc() calls can trigger soft lockups,
      especially with DEBUG features (such as KASAN) enabled.
      
      [  253.536212] watchdog: BUG: soft lockup - CPU#64 stuck for 26s! [b219417889:12575]
      [  253.544433] Modules linked in: vfat fat i2c_mux_pca954x i2c_mux spidev cdc_acm xhci_pci xhci_hcd sha3_generic gq(O)
      [  253.544451] CPU: 64 PID: 12575 Comm: b219417889 Tainted: G S         O      5.17.0-smp-DEV #801
      [  253.544457] RIP: 0010:kernel_text_address (./include/asm-generic/sections.h:192 ./include/linux/kallsyms.h:29 kernel/extable.c:67 kernel/extable.c:98)
      [  253.544464] Code: 0f 93 c0 48 c7 c1 e0 63 d7 a4 48 39 cb 0f 92 c1 20 c1 0f b6 c1 5b 5d c3 90 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 53 48 89 fb <48> c7 c0 00 00 80 a0 41 be 01 00 00 00 48 39 c7 72 0c 48 c7 c0 40
      [  253.544468] RSP: 0018:ffff8882d8baf4c0 EFLAGS: 00000246
      [  253.544471] RAX: 1ffff1105b175e00 RBX: ffffffffa13ef09a RCX: 00000000a13ef001
      [  253.544474] RDX: ffffffffa13ef09a RSI: ffff8882d8baf558 RDI: ffffffffa13ef09a
      [  253.544476] RBP: ffff8882d8baf4d8 R08: ffff8882d8baf5e0 R09: 0000000000000004
      [  253.544479] R10: ffff8882d8baf5e8 R11: ffffffffa0d59a50 R12: ffff8882eab20380
      [  253.544481] R13: ffffffffa0d59a50 R14: dffffc0000000000 R15: 1ffff1105b175eb0
      [  253.544483] FS:  00000000016d3380(0000) GS:ffff88af48c00000(0000) knlGS:0000000000000000
      [  253.544486] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  253.544488] CR2: 00000000004af0f0 CR3: 00000002eabfa004 CR4: 00000000003706e0
      [  253.544491] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  253.544492] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  253.544494] Call Trace:
      [  253.544496]  <TASK>
      [  253.544498] ? io_queue_sqe (fs/io_uring.c:7143)
      [  253.544505] __kernel_text_address (kernel/extable.c:78)
      [  253.544508] unwind_get_return_address (arch/x86/kernel/unwind_frame.c:19)
      [  253.544514] arch_stack_walk (arch/x86/kernel/stacktrace.c:27)
      [  253.544517] ? io_queue_sqe (fs/io_uring.c:7143)
      [  253.544521] stack_trace_save (kernel/stacktrace.c:123)
      [  253.544527] ____kasan_kmalloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:515)
      [  253.544531] ? ____kasan_kmalloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:515)
      [  253.544533] ? __kasan_kmalloc (mm/kasan/common.c:524)
      [  253.544535] ? kmem_cache_alloc_trace (./include/linux/kasan.h:270 mm/slab.c:3567)
      [  253.544541] ? io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828)
      [  253.544544] ? __io_queue_sqe (fs/io_uring.c:?)
      [  253.544551] __kasan_kmalloc (mm/kasan/common.c:524)
      [  253.544553] kmem_cache_alloc_trace (./include/linux/kasan.h:270 mm/slab.c:3567)
      [  253.544556] ? io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828)
      [  253.544560] io_issue_sqe (fs/io_uring.c:4556 fs/io_uring.c:4589 fs/io_uring.c:6828)
      [  253.544564] ? __kasan_slab_alloc (mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:469)
      [  253.544567] ? __kasan_slab_alloc (mm/kasan/common.c:39 mm/kasan/common.c:45 mm/kasan/common.c:436 mm/kasan/common.c:469)
      [  253.544569] ? kmem_cache_alloc_bulk (mm/slab.h:732 mm/slab.c:3546)
      [  253.544573] ? __io_alloc_req_refill (fs/io_uring.c:2078)
      [  253.544578] ? io_submit_sqes (fs/io_uring.c:7441)
      [  253.544581] ? __se_sys_io_uring_enter (fs/io_uring.c:10154 fs/io_uring.c:10096)
      [  253.544584] ? __x64_sys_io_uring_enter (fs/io_uring.c:10096)
      [  253.544587] ? do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80)
      [  253.544590] ? entry_SYSCALL_64_after_hwframe (??:?)
      [  253.544596] __io_queue_sqe (fs/io_uring.c:?)
      [  253.544600] io_queue_sqe (fs/io_uring.c:7143)
      [  253.544603] io_submit_sqe (fs/io_uring.c:?)
      [  253.544608] io_submit_sqes (fs/io_uring.c:?)
      [  253.544612] __se_sys_io_uring_enter (fs/io_uring.c:10154 fs/io_uring.c:10096)
      [  253.544616] __x64_sys_io_uring_enter (fs/io_uring.c:10096)
      [  253.544619] do_syscall_64 (arch/x86/entry/common.c:50 arch/x86/entry/common.c:80)
      [  253.544623] entry_SYSCALL_64_after_hwframe (??:?)
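
      The fix adds a voluntary reschedule point inside the loop. The toy
      model below is illustrative, not the io_uring code: it counts how
      many cond_resched()-style yield points a given batching policy would
      insert into the allocation loop.

```python
def add_buffers(nbufs, resched_every):
    """Simulate the buffer-provision loop; return (buffers, yield_points).
    resched_every=1 models yielding on every iteration."""
    bufs, yields = [], 0
    for i in range(nbufs):
        bufs.append(object())  # stands in for the per-iteration kmalloc()
        if (i + 1) % resched_every == 0:
            yields += 1        # a cond_resched() point would run here
    return bufs, yields
```

      Yielding frequently is cheap here, since a cond_resched()-style call
      is effectively a no-op when no reschedule is pending; the point is
      simply to give the scheduler a chance inside an otherwise ~65535-
      iteration kernel loop.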
      
      Fixes: ddf0322d ("io_uring: add IORING_OP_PROVIDE_BUFFERS")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Pavel Begunkov <asml.silence@gmail.com>
      Cc: io-uring <io-uring@vger.kernel.org>
      Reported-by: syzbot <syzkaller@googlegroups.com>
      Link: https://lore.kernel.org/r/20220215041003.2394784-1-eric.dumazet@gmail.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • T
      NFS: LOOKUP_DIRECTORY is also ok with symlinks · e0caaf75
      Authored by Trond Myklebust
      Commit ac795161 ("NFSv4: Handle case where the lookup of a directory
      fails") [1], part of Linux since 5.17-rc2, introduced a regression
      where a symbolic link on an NFS mount to a directory on another NFS
      mount does not resolve the first time it is accessed.
      Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
      Fixes: ac795161 ("NFSv4: Handle case where the lookup of a directory fails")
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
      Tested-by: Donald Buczek <buczek@molgen.mpg.de>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
    • T
      NFS: Remove an incorrect revalidation in nfs4_update_changeattr_locked() · 9d047bf6
      Authored by Trond Myklebust
      In nfs4_update_changeattr_locked(), we don't need to set the
      NFS_INO_REVAL_PAGECACHE flag, because we already know the value of
      the change attribute, and we're already flagging the size. In fact,
      this forces us to revalidate the change attribute a second time for
      no good reason.

      This extra flag appears to have been introduced as part of the xattr
      feature, when update_changeattr_locked() was converted for use by
      the xattr code.
      
      Fixes: 1b523ca9 ("nfs: modify update_changeattr to deal with regular files")
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
  13. 14 Feb 2022, 2 commits