1. 26 September 2022, 18 commits
    • btrfs: remove lock protection for BLOCK_GROUP_FLAG_TO_COPY · 9283b9e0
      Authored by Josef Bacik
      We use this during device replace for zoned devices.  We were only
      taking the lock because the flag lived in a bit field and we needed
      the lock to be safe against other modifications in the bit field.
      With the bit helpers we no longer require that locking.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: convert block group bit field to use bit helpers · 3349b57f
      Authored by Josef Bacik
      We use a bit field in the btrfs_block_group structure for different
      flags; however, this is awkward because we have to hold the
      block_group->lock for any modification of any of these fields, and it
      makes the code clunky for a few of these flags.  Convert these to a
      proper flags setup so we can utilize the bit helpers.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle space_info setting of bg in btrfs_add_bg_to_space_info · 723de71d
      Authored by Josef Bacik
      We previously had the pattern of
      
      	btrfs_update_space_info(all, the, bg, fields, &space_info);
      	link_block_group(bg);
      	bg->space_info = space_info;
      
      Now that we're passing the bg into btrfs_add_bg_to_space_info we can do
      the linking in that function, transforming this to simply
      
      	btrfs_add_bg_to_space_info(fs_info, bg);
      
      and put the link_block_group() and bg->space_info assignment directly in
      btrfs_add_bg_to_space_info.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: simplify arguments of btrfs_update_space_info and rename · 9d4b0a12
      Authored by Josef Bacik
      This function has grown a bunch of new arguments, and it just boils down
      to passing in all the block group fields as arguments.  Simplify this by
      passing in the block group itself and updating the space_info fields
      based on the block group fields directly.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use btrfs_fs_closing for background bg work · 2f12741f
      Authored by Josef Bacik
      Both the unused bg deletion and the async balance work will happily
      run even if the fs is closing.  However I want to move these to their
      own worker thread, and they can be long running jobs, so add a check
      to see if we're closing and simply bail.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename btrfs_insert_file_extent() to btrfs_insert_hole_extent() · d1f68ba0
      Authored by Omar Sandoval
      btrfs_insert_file_extent() is only ever used to insert holes, so rename
      it and remove the redundant parameters.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Omar Sandoval <osandov@osandov.com>
      Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: sysfs: use sysfs_streq for string matching · 7f298f22
      Authored by David Sterba
      We have our own string matching helper that duplicates what
      sysfs_streq does, with the slight difference that it skips initial
      whitespace.  So far this is used for the drive allocation policy.
      Initial whitespace in written sysfs values should rather be
      discouraged, and we should use the standard helper.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: try to fix super block errors · f9eab5f0
      Authored by Qu Wenruo
      [BUG]
      The following script shows that, although scrub can detect super block
      errors, it never tries to fix them:
      
      	mkfs.btrfs -f -d raid1 -m raid1 $dev1 $dev2
      	xfs_io -c "pwrite 67108864 4k" $dev2
      
      	mount $dev1 $mnt
      	btrfs scrub start -B $dev2
      	btrfs scrub start -Br $dev2
      	umount $mnt
      
      The first scrub reports the super error correctly:
      
        scrub done for f3289218-abd3-41ac-a630-202f766c0859
        Scrub started:    Tue Aug  2 14:44:11 2022
        Status:           finished
        Duration:         0:00:00
        Total to scrub:   1.26GiB
        Rate:             0.00B/s
        Error summary:    super=1
          Corrected:      0
          Uncorrectable:  0
          Unverified:     0
      
      But the second read-only scrub still reports the same super error:
      
        Scrub started:    Tue Aug  2 14:44:11 2022
        Status:           finished
        Duration:         0:00:00
        Total to scrub:   1.26GiB
        Rate:             0.00B/s
        Error summary:    super=1
          Corrected:      0
          Uncorrectable:  0
          Unverified:     0
      
      [CAUSE]
      The comments already shows that super block can be easily fixed by
      committing a transaction:
      
      	/*
      	 * If we find an error in a super block, we just report it.
      	 * They will get written with the next transaction commit
      	 * anyway
      	 */
      
      But in truth that assumption does not always hold, and since scrub
      should try to repair every error it finds (except in read-only scrub),
      we should actively commit a transaction to fix this.
      
      [FIX]
      Just commit a transaction if we found any super block errors, after
      everything else is done.
      
      We cannot do this just after scrub_supers(), as
      btrfs_commit_transaction() will try to pause and wait for the running
      scrub, thus we cannot call it with scrub_lock held.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: scrub: properly report super block errors in system log · e69bf81c
      Authored by Qu Wenruo
      [PROBLEM]
      
      Unlike data/metadata corruption, if scrub detected some error in the
      super block, the only error message is from the updated device status:
      
        BTRFS info (device dm-1): scrub: started on devid 2
        BTRFS error (device dm-1): bdev /dev/mapper/test-scratch2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
        BTRFS info (device dm-1): scrub: finished on devid 2 with status: 0
      
      This is not helpful at all.
      
      [CAUSE]
      Unlike data/metadata error reporting, there is no visible report in
      the kernel dmesg for super block errors.
      
      In fact, the return value of scrub_checksum_super() is intentionally
      skipped, thus scrub_handle_errored_block() will never be called for
      super blocks.
      
      [FIX]
      Make super block errors output an error message.  Now the full dmesg
      looks like this:
      
        BTRFS info (device dm-1): scrub: started on devid 2
        BTRFS warning (device dm-1): super block error on device /dev/mapper/test-scratch2, physical 67108864
        BTRFS error (device dm-1): bdev /dev/mapper/test-scratch2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
        BTRFS info (device dm-1): scrub: finished on devid 2 with status: 0
        BTRFS info (device dm-1): scrub: started on devid 2
      
      This fix involves:
      
      - Move the super_errors reporting to scrub_handle_errored_block()
        This allows the device status message to show after the super block
        error message.  However, we no longer distinguish super block
        corruption from generation mismatch; both are now counted as
        corruption.
      
      - Properly check the return value from scrub_checksum_super()
      - Add extra super block error reporting for scrub_print_warning().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix alignment of VMA for memory mapped files on THP · b0c58223
      Authored by Alexander Zhu
      With CONFIG_READ_ONLY_THP_FOR_FS, the Linux kernel supports using THPs for
      read-only mmapped files, such as shared libraries. However, the kernel
      makes no attempt to actually align those mappings on 2MB boundaries,
      which makes it impossible to use those THPs most of the time. This issue
      applies to general file mapping THP as well as existing setups using
      CONFIG_READ_ONLY_THP_FOR_FS. This is easily fixed by using
      thp_get_unmapped_area for the unmapped_area function in btrfs, which
      is what ext2, ext4, fuse, and xfs all use.
      
      Initially btrfs had been left out in commit 8c07fc452ac0 ("btrfs: fix
      alignment of VMA for memory mapped files on THP") as btrfs does not support
      DAX. However, commit 1854bc6e ("mm/readahead: Align file mappings
      for non-DAX") removed the DAX requirement. We should now be able to call
      thp_get_unmapped_area() for btrfs.
      
      The problem can be seen in /proc/PID/smaps where THPeligible is set to 0
      on mappings to eligible shared object files as shown below.
      
      Before this patch:
      
        7fc6a7e18000-7fc6a80cc000 r-xp 00000000 00:1e 199856
        /usr/lib64/libcrypto.so.1.1.1k
        Size:               2768 kB
        THPeligible:    0
        VmFlags: rd ex mr mw me
      
      With this patch the library is mapped at a 2MB aligned address:
      
        7fbdfe200000-7fbdfe4b4000 r-xp 00000000 00:1e 199856
        /usr/lib64/libcrypto.so.1.1.1k
        Size:               2768 kB
        THPeligible:    1
        VmFlags: rd ex mr mw me
      
      This fixes the alignment of VMAs for any mmap of a file that has the
      rd and ex permissions and size >= 2MB.  The VMA alignment and
      THPeligible field for anonymous memory is handled separately and is
      thus not affected by this change.
      
      CC: stable@vger.kernel.org # 5.18+
      Signed-off-by: Alexander Zhu <alexlzhu@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add lockdep annotations for the ordered extents wait event · 5f4403e1
      Authored by Ioannis Angelakopoulos
      This wait event is very similar to the pending ordered wait event in the
      sense that it occurs in a different context than the condition signaling
      for the event. The signaling occurs in btrfs_remove_ordered_extent()
      while the wait event is implemented in btrfs_start_ordered_extent() in
      fs/btrfs/ordered-data.c
      
      However, in this case a thread must not acquire the lockdep map for the
      ordered extents wait event when the ordered extent is related to a free
      space inode. That is because lockdep creates dependencies between locks
      acquired both in execution paths related to normal inodes and paths
      related to free space inodes, thus leading to false positives.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: change the lockdep class of free space inode's invalidate_lock · 9d7464c8
      Authored by Ioannis Angelakopoulos
      Reinitialize the class of the lockdep map for struct inode's
      mapping->invalidate_lock in load_free_space_cache() function in
      fs/btrfs/free-space-cache.c. This will prevent lockdep from producing
      false positives related to execution paths that make use of free space
      inodes and paths that make use of normal inodes.
      
      Specifically, with this change lockdep will create separate lock
      dependencies that include the invalidate_lock, in the case that free
      space inodes are used and in the case that normal inodes are used.
      
      The lockdep class for this lock was first initialized in
      inode_init_always() in fs/inode.c.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add lockdep annotations for pending_ordered wait event · 8b53779e
      Authored by Ioannis Angelakopoulos
      In contrast to the num_writers and num_extwriters wait events, the
      condition for the pending ordered wait event is signaled in a different
      context from the wait event itself. The condition signaling occurs in
      btrfs_remove_ordered_extent() in fs/btrfs/ordered-data.c while the wait
      event is implemented in btrfs_commit_transaction() in
      fs/btrfs/transaction.c
      
      Thus the thread signaling the condition has to acquire the lockdep map
      as a reader at the start of btrfs_remove_ordered_extent() and release it
      after it has signaled the condition. In this case some dependencies
      might be left out due to the placement of the annotation, but it is
      better than no annotation at all.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add lockdep annotations for transaction states wait events · 3e738c53
      Authored by Ioannis Angelakopoulos
      Add lockdep annotations for the transaction states that have wait
      events:
      
        1) TRANS_STATE_COMMIT_START
        2) TRANS_STATE_UNBLOCKED
        3) TRANS_STATE_SUPER_COMMITTED
        4) TRANS_STATE_COMPLETED
      
      The new macros introduced here to annotate the transaction states wait
      events have the same effect as the generic lockdep annotation macros.
      
      With the exception of the lockdep annotation for TRANS_STATE_COMMIT_START
      the transaction thread has to acquire the lockdep maps for the
      transaction states as reader after the lockdep map for num_writers is
      released so that lockdep does not complain.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add lockdep annotations for num_extwriters wait event · 5a9ba670
      Authored by Ioannis Angelakopoulos
      Similarly to the num_writers wait event in fs/btrfs/transaction.c add a
      lockdep annotation for the num_extwriters wait event.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add lockdep annotations for num_writers wait event · e1489b4f
      Authored by Ioannis Angelakopoulos
      Annotate the num_writers wait event in fs/btrfs/transaction.c with
      lockdep in order to catch deadlocks involving this wait event.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add macros for annotating wait events with lockdep · ab9a323f
      Authored by Ioannis Angelakopoulos
      Introduce four macros that are used to annotate wait events in btrfs
      code with lockdep:
      
        1) btrfs_lockdep_init_map
        2) btrfs_lockdep_acquire
        3) btrfs_lockdep_release
        4) btrfs_might_wait_for_event
      
      The btrfs_lockdep_init_map macro is used to initialize a lockdep map.
      
      The btrfs_lockdep_<acquire,release> macros are used by threads to take
      the lockdep map as readers (shared lock) and release it, respectively.
      
      The btrfs_might_wait_for_event macro is used by threads to take the
      lockdep map as writers (exclusive lock) and release it.
      
      In general, the lockdep annotation for wait events works as follows:
      
      The condition for a wait event can be modified and signaled at the same
      time by multiple threads. These threads hold the lockdep map as readers
      when they enter a context in which blocking would prevent signaling the
      condition. Frequently, this occurs when a thread violates a condition
      (lockdep map acquire), before restoring it and signaling it at a later
      point (lockdep map release).
      
      The threads that block on the wait event take the lockdep map as writers
      (exclusive lock). These threads have to block until all the threads that
      hold the lockdep map as readers signal the condition for the wait event
      and release the lockdep map.
      
      The lockdep annotation is used to warn about potential deadlock scenarios
      that involve the threads that modify and signal the wait event condition
      and threads that block on the wait event. A simple example is illustrated
      below:
      
      Without lockdep:
      
      TA                                        TB
      cond = false
                                                lock(A)
                                                wait_event(w, cond)
                                                unlock(A)
      lock(A)
      cond = true
      signal(w)
      unlock(A)
      
      With lockdep:
      
      TA                                        TB
      rwsem_acquire_read(lockdep_map)
      cond = false
                                                lock(A)
                                                rwsem_acquire(lockdep_map)
                                                rwsem_release(lockdep_map)
                                                wait_event(w, cond)
                                                unlock(A)
      lock(A)
      cond = true
      signal(w)
      unlock(A)
      rwsem_release(lockdep_map)
      
      In the second case, with the lockdep annotation, lockdep would warn about
      an ABBA deadlock, while the first case would just deadlock at some point.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: dump extra info if one free space cache has more bitmaps than it should · 62cd9d44
      Authored by Qu Wenruo
      There is an internal report on hitting the following ASSERT() in
      recalculate_thresholds():
      
       	ASSERT(ctl->total_bitmaps <= max_bitmaps);
      
      Above @max_bitmaps is calculated using the following variables:
      
      - bytes_per_bg
        8 * 4096 * 4096 (128M) for x86_64/x86.
      
      - block_group->length
        The length of the block group.
      
      @max_bitmaps is the rounded up value of block_group->length / 128M.
      
      Normally one free space cache should not have more bitmaps than the
      above value, but when it happens the ASSERT() can be triggered if
      CONFIG_BTRFS_ASSERT is also enabled.
      
      But the ASSERT() itself won't provide enough info to know what is
      going wrong.
      Is the bg too small, thus only allowing one bitmap?
      Or is there something else wrong?
      
      So although I haven't found extra reports or a crash dump to do
      further investigation, add the extra info to make it more helpful to
      debug.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  2. 22 September 2022, 7 commits
    • ext4: limit the number of retries after discarding preallocations blocks · 80fa46d6
      Authored by Theodore Ts'o
      This patch avoids threads live-locking for hours when a large number
      of threads are competing over the last few free extents as blocks get
      added to and removed from preallocation pools.  From our bug
      reporter:
      reporter:
      
         A reliable way for triggering this has multiple writers
         continuously write() to files when the filesystem is full, while
         small amounts of space are freed (e.g. by truncating a large file
         -1MiB at a time). In the local filesystem, this can be done by
         simply not checking the return code of write (0) and/or the error
         (ENOSPACE) that is set. Over NFS with an async mount, even clients
         with proper error checking will behave this way since the linux NFS
         client implementation will not propagate the server errors [the
         write syscalls immediately return success] until the file handle is
         closed. This leads to a situation where NFS clients send a
         continuous stream of WRITE rpcs which result in ERRNOSPACE -- but
         since the client isn't seeing this, the stream of writes continues
         at maximum network speed.
      
         When some space does appear, multiple writers will all attempt to
         claim it for their current write. For NFS, we may see dozens to
         hundreds of threads that do this.
      
         The real-world scenario of this is database backup tooling (in
         particular, github.com/mdkent/percona-xtrabackup) which may write
         large files (>1TiB) to NFS for safe keeping. Some temporary files
         are written, rewound, and read back -- all before closing the file
         handle (the temp file is actually unlinked, to trigger automatic
         deletion on close/crash.) An application like this operating on an
         async NFS mount will not see an error code until TiB have been
         written/read.
      
         The lockup was observed when running this database backup on large
         filesystems (64 TiB in this case) with a high number of block
         groups and no free space. Fragmentation is generally not a factor
         in this filesystem (~thousands of large files, mostly contiguous
         except for the parts written while the filesystem is at capacity.)
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@kernel.org
    • ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0 · 29a5b8a1
      Authored by Luís Henriques
      When walking through an inode's extents, the ext4_ext_binsearch_idx()
      function assumes that the extent header has been previously validated.
      However, there are no checks verifying that the number of entries
      (eh->eh_entries) is non-zero when the depth is > 0.  This leads to
      problems because EXT_FIRST_INDEX() and EXT_LAST_INDEX() will return
      garbage, resulting in this:
      
      [  135.245946] ------------[ cut here ]------------
      [  135.247579] kernel BUG at fs/ext4/extents.c:2258!
      [  135.249045] invalid opcode: 0000 [#1] PREEMPT SMP
      [  135.250320] CPU: 2 PID: 238 Comm: tmp118 Not tainted 5.19.0-rc8+ #4
      [  135.252067] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.15.0-0-g2dd4b9b-rebuilt.opensuse.org 04/01/2014
      [  135.255065] RIP: 0010:ext4_ext_map_blocks+0xc20/0xcb0
      [  135.256475] Code:
      [  135.261433] RSP: 0018:ffffc900005939f8 EFLAGS: 00010246
      [  135.262847] RAX: 0000000000000024 RBX: ffffc90000593b70 RCX: 0000000000000023
      [  135.264765] RDX: ffff8880038e5f10 RSI: 0000000000000003 RDI: ffff8880046e922c
      [  135.266670] RBP: ffff8880046e9348 R08: 0000000000000001 R09: ffff888002ca580c
      [  135.268576] R10: 0000000000002602 R11: 0000000000000000 R12: 0000000000000024
      [  135.270477] R13: 0000000000000000 R14: 0000000000000024 R15: 0000000000000000
      [  135.272394] FS:  00007fdabdc56740(0000) GS:ffff88807dd00000(0000) knlGS:0000000000000000
      [  135.274510] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  135.276075] CR2: 00007ffc26bd4f00 CR3: 0000000006261004 CR4: 0000000000170ea0
      [  135.277952] Call Trace:
      [  135.278635]  <TASK>
      [  135.279247]  ? preempt_count_add+0x6d/0xa0
      [  135.280358]  ? percpu_counter_add_batch+0x55/0xb0
      [  135.281612]  ? _raw_read_unlock+0x18/0x30
      [  135.282704]  ext4_map_blocks+0x294/0x5a0
      [  135.283745]  ? xa_load+0x6f/0xa0
      [  135.284562]  ext4_mpage_readpages+0x3d6/0x770
      [  135.285646]  read_pages+0x67/0x1d0
      [  135.286492]  ? folio_add_lru+0x51/0x80
      [  135.287441]  page_cache_ra_unbounded+0x124/0x170
      [  135.288510]  filemap_get_pages+0x23d/0x5a0
      [  135.289457]  ? path_openat+0xa72/0xdd0
      [  135.290332]  filemap_read+0xbf/0x300
      [  135.291158]  ? _raw_spin_lock_irqsave+0x17/0x40
      [  135.292192]  new_sync_read+0x103/0x170
      [  135.293014]  vfs_read+0x15d/0x180
      [  135.293745]  ksys_read+0xa1/0xe0
      [  135.294461]  do_syscall_64+0x3c/0x80
      [  135.295284]  entry_SYSCALL_64_after_hwframe+0x46/0xb0
      
      This patch simply adds an extra check in __ext4_ext_check(), verifying that
      eh_entries is not 0 when eh_depth is > 0.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=215941
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=216283
      Cc: Baokun Li <libaokun1@huawei.com>
      Cc: stable@kernel.org
      Signed-off-by: Luís Henriques <lhenriques@suse.de>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Baokun Li <libaokun1@huawei.com>
      Link: https://lore.kernel.org/r/20220822094235.2690-1-lhenriques@suse.de
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • ext4: use buckets for cr 1 block scan instead of rbtree · 83e80a6e
      Authored by Jan Kara
      Using an rbtree for sorting groups by average fragment size is
      relatively expensive (it needs an rbtree update on every block freeing
      or allocation) and leads to wide spreading of allocations, because the
      selection of a block group is very sensitive both to changes in free
      space and to the amount of blocks allocated.  Furthermore, selecting
      the group with the best matching average fragment size is not
      necessary anyway, even more so because the variability of fragment
      sizes within a group is likely large, so the average is not telling
      much.  We just need a group with a large enough average fragment size
      so that we have a high probability of finding a large enough free
      extent, and we don't want the average fragment size to be too big so
      that we are likely to find a free extent only somewhat larger than
      what we need.
      
      So instead of maintaining an rbtree of groups sorted by fragment size,
      keep bins (lists) of groups where the average fragment size is in the
      interval [2^i, 2^(i+1)).  This structure requires fewer updates on
      block allocation / freeing, generally avoids chaotic spreading of
      allocations into block groups, and is still able to quickly (even
      faster than the rbtree) provide a block group which is likely to have
      a suitably sized free space extent.
      
      This patch reduces the number of block groups used when untarring an
      archive with medium sized files (size somewhat above 64k, which is the
      default mballoc limit for avoiding locality group preallocation) to
      about half, and thus improves write speeds for eMMC flash
      significantly.
      
      Fixes: 196e402a ("ext4: improve cr 0 / cr 1 group scanning")
      CC: stable@kernel.org
      Reported-and-tested-by: Stefan Wahren <stefan.wahren@i2se.com>
      Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
      Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
      Link: https://lore.kernel.org/r/20220908092136.11770-5-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • ext4: use locality group preallocation for small closed files · a9f2a293
      Authored by Jan Kara
      Currently we don't use any preallocation when a file is already closed
      when allocating blocks (from writeback code when converting delayed
      allocation).  However, for small files using locality group
      preallocation is actually desirable, as that is not specific to a
      particular file; rather it is a method to pack small files together to
      reduce fragmentation, and for that the fact that the file is closed is
      an even stronger hint that the file would benefit from packing.  So
      change the logic to allow locality group preallocation in this case.
      
      Fixes: 196e402a ("ext4: improve cr 0 / cr 1 group scanning")
      CC: stable@kernel.org
      Reported-and-tested-by: Stefan Wahren <stefan.wahren@i2se.com>
      Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
      Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
      Link: https://lore.kernel.org/r/20220908092136.11770-4-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • ext4: make directory inode spreading reflect flexbg size · 613c5a85
      Authored by Jan Kara
      Currently the Orlov inode allocator searches for free inodes for a
      directory only in flex block groups with at most inodes_per_group/16
      more directory inodes than average per flex block group. However with
      growing size of flex block group this becomes unnecessarily strict.
      Scale allowed difference from average directory count per flex block
      group with flex block group size as we do with other metrics.
      Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
      Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
      Cc: stable@kernel.org
      Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20220908092136.11770-3-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
    • ext4: avoid unnecessary spreading of allocations among groups · 1940265e
      Authored by Jan Kara
      mb_set_largest_free_order() updates the lists containing groups with
      the largest chunk of free space of a given order.  The way it updates
      them leads to always moving the group to the tail of the list.  Thus
      allocations looking for free space of a given order effectively end up
      cycling through all groups (and, due to initialization, in
      last-to-first order).  This spreads allocations among block groups,
      which reduces performance for rotating disks or low-end flash media.
      Change mb_set_largest_free_order() to only update the lists if the
      order of the largest free chunk in the group changed.
      
      Fixes: 196e402a ("ext4: improve cr 0 / cr 1 group scanning")
      CC: stable@kernel.org
      Reported-and-tested-by: Stefan Wahren <stefan.wahren@i2se.com>
      Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
      Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
      Link: https://lore.kernel.org/r/20220908092136.11770-2-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      1940265e
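      The essence of the fix can be modeled in a few lines of C. This is
      illustrative only; the real mb_set_largest_free_order() manipulates
      per-order lists under a lock, which is elided here.

```c
#include <assert.h>

/*
 * Illustrative model: the group is unlinked and relinked (moving it
 * to the tail of the per-order list) only when the order of its
 * largest free chunk actually changed; otherwise its list position
 * is left alone, avoiding the constant rotation of groups that
 * spread allocations across the whole filesystem.
 */
static int update_largest_free_order(int *cur_order, int new_order)
{
    if (*cur_order == new_order)
        return 0;            /* unchanged: keep the list position */
    *cur_order = new_order;  /* changed: relink into the new list */
    return 1;
}
```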
    • J
      ext4: make mballoc try target group first even with mb_optimize_scan · 4fca50d4
      Jan Kara committed
      One of the side-effects of mb_optimize_scan was that the optimized
      functions to select next group to try were called even before we tried
      the goal group. As a result we no longer allocate files close to their
      corresponding inodes, and we no longer try to expand the currently
      allocated extent in the same group. This results in a reaim regression
      with the workfile.disk workload of up to 8% with many clients on my test
      machine:
      
                           baseline               mb_optimize_scan
      Hmean     disk-1       2114.16 (   0.00%)     2099.37 (  -0.70%)
      Hmean     disk-41     87794.43 (   0.00%)    83787.47 *  -4.56%*
      Hmean     disk-81    148170.73 (   0.00%)   135527.05 *  -8.53%*
      Hmean     disk-121   177506.11 (   0.00%)   166284.93 *  -6.32%*
      Hmean     disk-161   220951.51 (   0.00%)   207563.39 *  -6.06%*
      Hmean     disk-201   208722.74 (   0.00%)   203235.59 (  -2.63%)
      Hmean     disk-241   222051.60 (   0.00%)   217705.51 (  -1.96%)
      Hmean     disk-281   252244.17 (   0.00%)   241132.72 *  -4.41%*
      Hmean     disk-321   255844.84 (   0.00%)   245412.84 *  -4.08%*
      
      Also this is causing huge regression (time increased by a factor of 5 or
      so) when untarring archive with lots of small files on some eMMC storage
      cards.
      
      Fix the problem by making sure we try goal group first.
      
      Fixes: 196e402a ("ext4: improve cr 0 / cr 1 group scanning")
      CC: stable@kernel.org
      Reported-and-tested-by: Stefan Wahren <stefan.wahren@i2se.com>
      Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
      Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
      Link: https://lore.kernel.org/all/20220727105123.ckwrhbilzrxqpt24@quack3/
      Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
      Signed-off-by: Jan Kara <jack@suse.cz>
      Link: https://lore.kernel.org/r/20220908092136.11770-1-jack@suse.cz
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      4fca50d4
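      The ordering the fix restores can be sketched as follows. This is a
      simplified model, not the actual mballoc code; `has_space[]` is an
      assumed stand-in for the per-group free-space check, and the linear
      loop stands in for the optimized scan.

```c
#include <assert.h>

/*
 * Illustrative model: probe the goal group first so allocations stay
 * close to the corresponding inode (and can extend an existing
 * extent), and only fall back to the optimized scan order when the
 * goal group cannot serve the request.
 */
static int choose_group(int goal, const int *has_space, int ngroups)
{
    if (has_space[goal])
        return goal;                    /* goal group tried first */
    for (int i = 0; i < ngroups; i++)   /* stand-in for optimized scan */
        if (has_space[i])
            return i;
    return -1;                          /* no group has space */
}
```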
  3. 20 Sep 2022, 1 commit
    • T
      open: always initialize ownership fields · f52d74b1
      Tetsuo Handa committed
      At the beginning of the merge window we introduced the vfs{g,u}id_t
      types in b27c82e1 ("attr: port attribute changes to new types") and
      converted various codepaths, including chown_common().
      
      During that change we forgot to account for the case where the passed
      ownership value is -1. In this case the ownership fields in struct iattr
      aren't initialized, but we rely on them being initialized by the time we
      generate the ownership to pass down to the LSMs. All the major LSMs
      don't care about the ownership values at all. Only Tomoyo uses them, and
      so it took a while for syzbot to unearth this issue.
      
      Fix this by initializing the ownership fields and do it within the
      retry_deleg block. While notify_change() doesn't alter the ownership
      fields currently we shouldn't rely on it.
      
      Since no kernel has been released with these changes, this does not
      need to be backported to any stable kernels.
      
      [Christian Brauner (Microsoft) <brauner@kernel.org>]
      * rewrote commit message
      * use INVALID_VFS{G,U}ID macros
      
      Fixes: b27c82e1 ("attr: port attribute changes to new types") # mainline only
      Reported-and-tested-by: syzbot+541e21dcc32c4046cba9@syzkaller.appspotmail.com
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Reviewed-by: Seth Forshee (DigitalOcean) <sforshee@kernel.org>
      Signed-off-by: Christian Brauner (Microsoft) <brauner@kernel.org>
      f52d74b1
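      The shape of the fix can be sketched in userspace C. All types and
      names below are illustrative stand-ins for the kernel's vfsuid_t
      machinery and the INVALID_VFS{G,U}ID macros the message mentions; only
      the uid side is modeled.

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel types; not real kernel API. */
typedef struct { unsigned int val; } vfsuid_model_t;
#define MODEL_INVALID_UID ((unsigned int)-1)
#define MODEL_ATTR_UID    0x2u

struct iattr_model {
    unsigned int   ia_valid;
    vfsuid_model_t ia_vfsuid;
};

/*
 * Sketch of the fix: the ownership field is initialized
 * unconditionally (to the explicit invalid marker) even when the
 * caller passed -1 meaning "don't change", so later consumers such
 * as an LSM never read an uninitialized value.
 */
static void prepare_chown(struct iattr_model *attr, long uid)
{
    attr->ia_valid = 0;
    attr->ia_vfsuid.val = MODEL_INVALID_UID;  /* always initialized */
    if (uid != -1) {
        attr->ia_valid |= MODEL_ATTR_UID;
        attr->ia_vfsuid.val = (unsigned int)uid;
    }
}
```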
  4. 14 Sep 2022, 5 commits
  5. 13 Sep 2022, 5 commits
    • N
      btrfs: zoned: wait for extent buffer IOs before finishing a zone · 2dd7e7bc
      Naohiro Aota committed
      Before sending REQ_OP_ZONE_FINISH to a zone, we need to ensure that
      ongoing IOs have already finished. Otherwise, we will see a "Zone Is
      Full" error for those IOs, as the ZONE_FINISH command makes the zone full.
      
      We ensure that with btrfs_wait_block_group_reservations() and
      btrfs_wait_ordered_roots() for a data block group. And, for a metadata
      block group, the comparison of alloc_offset vs meta_write_pointer mostly
      ensures that IOs for the allocated region have already been sent.
      However, there still
      can be a little time frame where the IOs are sent but not yet completed.
      
      Introduce wait_eb_writebacks() to ensure such IOs are completed for a
      metadata block group. It walks the buffer_radix to find extent buffers in
      the block group and calls wait_on_extent_buffer_writeback() on them.
      
      Fixes: afba2bc0 ("btrfs: zoned: implement active zone tracking")
      CC: stable@vger.kernel.org # 5.19+
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2dd7e7bc
    • F
      btrfs: fix hang during unmount when stopping a space reclaim worker · a362bb86
      Filipe Manana committed
      Often when running generic/562 from fstests we can hang during unmount,
      resulting in a trace like this:
      
        Sep 07 11:52:00 debian9 unknown: run fstests generic/562 at 2022-09-07 11:52:00
        Sep 07 11:55:32 debian9 kernel: INFO: task umount:49438 blocked for more than 120 seconds.
        Sep 07 11:55:32 debian9 kernel:       Not tainted 6.0.0-rc2-btrfs-next-122 #1
        Sep 07 11:55:32 debian9 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 07 11:55:32 debian9 kernel: task:umount          state:D stack:    0 pid:49438 ppid: 25683 flags:0x00004000
        Sep 07 11:55:32 debian9 kernel: Call Trace:
        Sep 07 11:55:32 debian9 kernel:  <TASK>
        Sep 07 11:55:32 debian9 kernel:  __schedule+0x3c8/0xec0
        Sep 07 11:55:32 debian9 kernel:  ? rcu_read_lock_sched_held+0x12/0x70
        Sep 07 11:55:32 debian9 kernel:  schedule+0x5d/0xf0
        Sep 07 11:55:32 debian9 kernel:  schedule_timeout+0xf1/0x130
        Sep 07 11:55:32 debian9 kernel:  ? lock_release+0x224/0x4a0
        Sep 07 11:55:32 debian9 kernel:  ? lock_acquired+0x1a0/0x420
        Sep 07 11:55:32 debian9 kernel:  ? trace_hardirqs_on+0x2c/0xd0
        Sep 07 11:55:32 debian9 kernel:  __wait_for_common+0xac/0x200
        Sep 07 11:55:32 debian9 kernel:  ? usleep_range_state+0xb0/0xb0
        Sep 07 11:55:32 debian9 kernel:  __flush_work+0x26d/0x530
        Sep 07 11:55:32 debian9 kernel:  ? flush_workqueue_prep_pwqs+0x140/0x140
        Sep 07 11:55:32 debian9 kernel:  ? trace_clock_local+0xc/0x30
        Sep 07 11:55:32 debian9 kernel:  __cancel_work_timer+0x11f/0x1b0
        Sep 07 11:55:32 debian9 kernel:  ? close_ctree+0x12b/0x5b3 [btrfs]
        Sep 07 11:55:32 debian9 kernel:  ? __trace_bputs+0x10b/0x170
        Sep 07 11:55:32 debian9 kernel:  close_ctree+0x152/0x5b3 [btrfs]
        Sep 07 11:55:32 debian9 kernel:  ? evict_inodes+0x166/0x1c0
        Sep 07 11:55:32 debian9 kernel:  generic_shutdown_super+0x71/0x120
        Sep 07 11:55:32 debian9 kernel:  kill_anon_super+0x14/0x30
        Sep 07 11:55:32 debian9 kernel:  btrfs_kill_super+0x12/0x20 [btrfs]
        Sep 07 11:55:32 debian9 kernel:  deactivate_locked_super+0x2e/0xa0
        Sep 07 11:55:32 debian9 kernel:  cleanup_mnt+0x100/0x160
        Sep 07 11:55:32 debian9 kernel:  task_work_run+0x59/0xa0
        Sep 07 11:55:32 debian9 kernel:  exit_to_user_mode_prepare+0x1a6/0x1b0
        Sep 07 11:55:32 debian9 kernel:  syscall_exit_to_user_mode+0x16/0x40
        Sep 07 11:55:32 debian9 kernel:  do_syscall_64+0x48/0x90
        Sep 07 11:55:32 debian9 kernel:  entry_SYSCALL_64_after_hwframe+0x63/0xcd
        Sep 07 11:55:32 debian9 kernel: RIP: 0033:0x7fcde59a57a7
        Sep 07 11:55:32 debian9 kernel: RSP: 002b:00007ffe914217c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
        Sep 07 11:55:32 debian9 kernel: RAX: 0000000000000000 RBX: 00007fcde5ae8264 RCX: 00007fcde59a57a7
        Sep 07 11:55:32 debian9 kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055b57556cdd0
        Sep 07 11:55:32 debian9 kernel: RBP: 000055b57556cba0 R08: 0000000000000000 R09: 00007ffe91420570
        Sep 07 11:55:32 debian9 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
        Sep 07 11:55:32 debian9 kernel: R13: 000055b57556cdd0 R14: 000055b57556ccb8 R15: 0000000000000000
        Sep 07 11:55:32 debian9 kernel:  </TASK>
      
      What happens is the following:
      
      1) The cleaner kthread tries to start a transaction to delete an unused
         block group, but the metadata reservation can not be satisfied right
         away, so a reservation ticket is created and it starts the async
         metadata reclaim task (fs_info->async_reclaim_work);
      
      2) Writeback for all the filler inodes with an i_size of 2K starts
         (generic/562 creates a lot of 2K files with the goal of filling
         metadata space). We try to create an inline extent for them, but we
         fail when trying to insert the inline extent with -ENOSPC (at
         cow_file_range_inline()) - since this is not critical, we fallback
         to non-inline mode (back to cow_file_range()), reserve extents, create
         extent maps and create the ordered extents;
      
      3) An unmount starts, enters close_ctree();
      
      4) The async reclaim task is flushing stuff, entering the flush states one
         by one, until it reaches RUN_DELAYED_IPUTS. There it runs all current
         delayed iputs.
      
         After running the delayed iputs and before calling
         btrfs_wait_on_delayed_iputs(), one or more ordered extents complete,
         and btrfs_add_delayed_iput() is called for each one through
         btrfs_finish_ordered_io() -> btrfs_put_ordered_extent(). This results
         in bumping fs_info->nr_delayed_iputs from 0 to some positive value.
      
         So the async reclaim task blocks at btrfs_wait_on_delayed_iputs() waiting
         for fs_info->nr_delayed_iputs to become 0;
      
      5) The current transaction is committed by the transaction kthread, we then
         start unpinning extents and end up calling btrfs_try_granting_tickets()
         through unpin_extent_range(), since we released some space.
         This results in satisfying the ticket created by the cleaner kthread at
         step 1, waking up the cleaner kthread;
      
      6) At close_ctree() we ask the cleaner kthread to park;
      
      7) The cleaner kthread starts the transaction, deletes the unused block
         group, and then calls kthread_should_park(), which returns true, so it
         parks. And at this point we have the delayed iputs added by the
         completion of the ordered extents still pending;
      
      8) Then later at close_ctree(), when we call:
      
             cancel_work_sync(&fs_info->async_reclaim_work);
      
         We hang forever, since the cleaner was parked and no one else can run
         delayed iputs after that, while the reclaim task is waiting for the
         remaining delayed iputs to be completed.
      
      Fix this by waiting for all ordered extents to complete and running the
      delayed iputs before attempting to stop the async reclaim tasks. Note that
      we can not wait for ordered extents with btrfs_wait_ordered_roots() (or
      other similar functions) because that waits for the BTRFS_ORDERED_COMPLETE
      flag to be set on an ordered extent, but the delayed iput is added after
      that, when doing the final btrfs_put_ordered_extent(). So instead wait for
      the work queues used for executing ordered extent completion to be empty,
      which works because we do the final put on an ordered extent at
      btrfs_finish_ordered_io() (while we are in the unmount context).
      
      Fixes: d6fd0ae2 ("Btrfs: fix missing delayed iputs on unmount")
      CC: stable@vger.kernel.org # 5.15+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a362bb86
    • F
      btrfs: fix hang during unmount when stopping block group reclaim worker · 8a1f1e3d
      Filipe Manana committed
      During early unmount, at close_ctree(), we try to stop the block group
      reclaim task with cancel_work_sync(), but that may hang if the block group
      reclaim task is currently at btrfs_relocate_block_group() waiting for the
      flag BTRFS_FS_UNFINISHED_DROPS to be cleared from fs_info->flags. During
      unmount we only clear that flag later, after trying to stop the block
      group reclaim task.
      
      Fix that by clearing BTRFS_FS_UNFINISHED_DROPS before trying to stop the
      block group reclaim task and after setting BTRFS_FS_CLOSING_START, so that
      if the reclaim task is waiting on that bit, it will stop immediately after
      being woken, because it sees the filesystem is closing (with a call to
      btrfs_fs_closing()), and then returns immediately with -EINTR.
      
      Fixes: 31e70e52 ("btrfs: fix hang during unmount when block group reclaim task is running")
      CC: stable@vger.kernel.org # 5.15+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8a1f1e3d
    • A
      nfsd_splice_actor(): handle compound pages · bfbfb618
      Al Viro committed
      A pipe_buffer might refer to a compound page (and contain more than a
      PAGE_SIZE worth of data). Theoretically that has been possible for a
      long time, but nfsd_splice_actor() hadn't run into it until the
      copy_page_to_iter() change. Fortunately, the only thing that changes for
      compound pages is that we need to stuff each relevant subpage in and
      convert the offset into an offset within the first subpage.
      Acked-by: Chuck Lever <chuck.lever@oracle.com>
      Tested-by: Benjamin Coddington <bcodding@redhat.com>
      Fixes: f0f6b614 ("copy_page_to_iter(): don't split high-order page in case of ITER_PIPE")
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      bfbfb618
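      The offset conversion described above can be sketched with plain
      arithmetic. This is illustrative only; the real nfsd_splice_actor()
      works on struct page, and a 4 KiB PAGE_SIZE is assumed here.

```c
#include <assert.h>

#define MODEL_PAGE_SIZE 4096u

/* Index of the subpage a byte offset within a compound page falls in. */
static unsigned int subpage_index(unsigned int offset)
{
    return offset / MODEL_PAGE_SIZE;
}

/* Offset within the first relevant subpage. */
static unsigned int subpage_offset(unsigned int offset)
{
    return offset % MODEL_PAGE_SIZE;
}

/* How many subpages the range [offset, offset + len) touches,
 * i.e. how many entries need to be stuffed in one at a time. */
static unsigned int subpages_needed(unsigned int offset, unsigned int len)
{
    return (offset % MODEL_PAGE_SIZE + len + MODEL_PAGE_SIZE - 1)
           / MODEL_PAGE_SIZE;
}
```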
    • R
      cifs: revalidate mapping when doing direct writes · 7500a992
      Ronnie Sahlberg committed
      Kernel bugzilla: 216301
      
      When doing direct writes we also need to invalidate the mapping in case
      we have a cached copy of the affected page(s) in memory, or else
      subsequent reads of the data might return the old/stale content from
      before we wrote the update to the server.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
      Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
      Signed-off-by: Steve French <stfrench@microsoft.com>
      7500a992
  6. 09 Sep 2022, 2 commits
    • N
      NFSD: fix regression with setting ACLs. · 00801cd9
      NeilBrown committed
      A recent patch moved ACL setting into nfsd_setattr().
      Unfortunately it didn't work, as nfsd_setattr() aborts early if
      iap->ia_valid is 0.
      
      Remove this test, and instead avoid calling notify_change() when
      ia_valid is 0.
      
      This means that nfsd_setattr() will now *always* lock the inode.
      Previously it didn't if only a ATTR_MODE change was requested on a
      symlink (see Commit 15b7a1b8 ("[PATCH] knfsd: fix setattr-on-symlink
      error return")). I don't think this change really matters.
      
      Fixes: c0cbe707 ("NFSD: add posix ACLs to struct nfsd_attrs")
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reviewed-by: Jeff Layton <jlayton@kernel.org>
      Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
      00801cd9
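      The control flow of the fix can be modeled in a few lines. Names and
      structure below are illustrative assumptions, not the real nfsd code;
      locking and the actual ACL work are reduced to comments and a flag.

```c
#include <assert.h>

/*
 * Illustrative model: instead of aborting early when ia_valid is 0
 * (which also skipped the ACL work), the attribute phase is always
 * entered and only the notify_change() call is made conditional.
 * Returns the number of notify_change() invocations performed.
 */
static int setattr_model(unsigned int ia_valid, int *acl_was_set)
{
    int notify_calls = 0;

    /* inode lock taken unconditionally now ... */
    if (ia_valid != 0)
        notify_calls++;     /* notify_change() only for real changes */
    *acl_was_set = 1;       /* ACL setting proceeds either way */
    /* ... inode unlocked */
    return notify_calls;
}
```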
    • B
      tracefs: Only clobber mode/uid/gid on remount if asked · 47311db8
      Brian Norris committed
      Users may have explicitly configured their tracefs permissions; we
      shouldn't overwrite those just because a second mount appeared.
      
      Only clobber if the options were provided at mount time.
      
      Note: the previous behavior was especially surprising in the presence of
      automounted /sys/kernel/debug/tracing/.
      
      Existing behavior:
      
        ## Pre-existing status: tracefs is 0755.
        # stat -c '%A' /sys/kernel/tracing/
        drwxr-xr-x
      
        ## (Re)trigger the automount.
        # umount /sys/kernel/debug/tracing
        # stat -c '%A' /sys/kernel/debug/tracing/.
        drwx------
      
        ## Unexpected: the automount changed mode for other mount instances.
        # stat -c '%A' /sys/kernel/tracing/
        drwx------
      
      New behavior (after this change):
      
        ## Pre-existing status: tracefs is 0755.
        # stat -c '%A' /sys/kernel/tracing/
        drwxr-xr-x
      
        ## (Re)trigger the automount.
        # umount /sys/kernel/debug/tracing
        # stat -c '%A' /sys/kernel/debug/tracing/.
        drwxr-xr-x
      
        ## Expected: the automount does not change other mount instances.
        # stat -c '%A' /sys/kernel/tracing/
        drwxr-xr-x
      
      Link: https://lkml.kernel.org/r/20220826174353.2.Iab6e5ea57963d6deca5311b27fb7226790d44406@changeid
      
      Cc: stable@vger.kernel.org
      Fixes: 4282d606 ("tracefs: Add new tracefs file system")
      Signed-off-by: Brian Norris <briannorris@chromium.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
      47311db8
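      The remount behavior can be sketched as follows. This is a simplified
      model; `mode_set` mirrors the "was this option actually supplied" flag
      idea, and only the mode attribute is shown (uid/gid are analogous).

```c
#include <assert.h>

/*
 * Illustrative model of the fix: remember whether "mode=" appeared
 * in the mount options, and only then overwrite the attribute;
 * otherwise a remount or a second (auto)mount leaves manually
 * configured permissions alone.
 */
struct tracefs_opts_model {
    int mode_set;            /* 1 if "mode=" appeared in the options */
    unsigned int mode;
};

static unsigned int apply_remount(const struct tracefs_opts_model *opts,
                                  unsigned int current_mode)
{
    return opts->mode_set ? opts->mode : current_mode;
}
```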
  7. 08 Sep 2022, 1 commit
  8. 07 Sep 2022, 1 commit