1. 26 September 2022, 5 commits
    • btrfs: add lockdep annotations for num_writers wait event · e1489b4f
      Ioannis Angelakopoulos authored
      Annotate the num_writers wait event in fs/btrfs/transaction.c with
      lockdep in order to catch deadlocks involving this wait event.
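
      Roughly, the annotation follows the pattern sketched below, using the
      macros introduced in the next entry (ab9a323f). The exact call sites in
      fs/btrfs/transaction.c and the lockdep map name are assumptions of this
      sketch, not the patch verbatim:

        /* Sketch only: names and call sites are approximate, not the exact patch. */

        /* A transaction writer: after num_writers is incremented, this thread
         * could block the wait event below from completing, so it holds the
         * lockdep map as a reader. */
        btrfs_lockdep_acquire(fs_info, btrfs_trans_num_writers);

        /* ... when the writer is done, it releases the map and signals ... */
        btrfs_lockdep_release(fs_info, btrfs_trans_num_writers);
        if (atomic_dec_and_test(&cur_trans->num_writers))
                wake_up(&cur_trans->writer_wait);

        /* The committing thread tells lockdep it is about to block on the
         * event, then waits until it is the only writer left. */
        btrfs_might_wait_for_event(fs_info, btrfs_trans_num_writers);
        wait_event(cur_trans->writer_wait,
                   atomic_read(&cur_trans->num_writers) == 1);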
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add macros for annotating wait events with lockdep · ab9a323f
      Ioannis Angelakopoulos authored
      Introduce four macros that are used to annotate wait events in btrfs code
      with lockdep:
      
        1) btrfs_lockdep_init_map
        2) btrfs_lockdep_acquire
        3) btrfs_lockdep_release
        4) btrfs_might_wait_for_event
      
      The btrfs_lockdep_init_map macro is used to initialize a lockdep map.
      
      The btrfs_lockdep_<acquire,release> macros are used by threads to take
      the lockdep map as readers (shared lock) and release it, respectively.
      
      The btrfs_might_wait_for_event macro is used by threads to take the
      lockdep map as writers (exclusive lock) and release it.
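
      A minimal sketch of how these macros can be built on lockdep's rwsem
      annotation primitives is shown below; the lock##_map field convention and
      the exact expansion are assumptions of this sketch rather than the patch
      verbatim:

        /* Sketch: wrap lockdep's rwsem annotations; not the exact patch. */
        #define btrfs_lockdep_init_map(owner, lock)                             \
        do {                                                                    \
                static struct lock_class_key lock##_key;                        \
                lockdep_init_map(&(owner)->lock##_map, #lock, &lock##_key, 0);  \
        } while (0)

        /* Readers: threads that can delay the condition from being signaled. */
        #define btrfs_lockdep_acquire(owner, lock)                              \
                rwsem_acquire_read(&(owner)->lock##_map, 0, 0, _THIS_IP_)

        #define btrfs_lockdep_release(owner, lock)                              \
                rwsem_release(&(owner)->lock##_map, _THIS_IP_)

        /* Writer: a thread that is about to block on the wait event. */
        #define btrfs_might_wait_for_event(owner, lock)                         \
        do {                                                                    \
                rwsem_acquire(&(owner)->lock##_map, 0, 0, _THIS_IP_);           \
                rwsem_release(&(owner)->lock##_map, _THIS_IP_);                 \
        } while (0)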
      
      In general, the lockdep annotation for wait events works as follows:
      
      The condition for a wait event can be modified and signaled at the same
      time by multiple threads. These threads hold the lockdep map as readers
      when they enter a context in which blocking would prevent the condition
      from being signaled. Frequently, this occurs when a thread violates the
      condition (lockdep map acquire) and then restores and signals it at a
      later point (lockdep map release).
      
      The threads that block on the wait event take the lockdep map as writers
      (exclusive lock). These threads have to block until all the threads that
      hold the lockdep map as readers signal the condition for the wait event
      and release the lockdep map.
      
      The lockdep annotation is used to warn about potential deadlock scenarios
      that involve the threads that modify and signal the wait event condition
      and threads that block on the wait event. A simple example is illustrated
      below:
      
      Without lockdep:
      
      TA                                        TB
      cond = false
                                                lock(A)
                                                wait_event(w, cond)
                                                unlock(A)
      lock(A)
      cond = true
      signal(w)
      unlock(A)
      
      With lockdep:
      
      TA                                        TB
      rwsem_acquire_read(lockdep_map)
      cond = false
                                                lock(A)
                                                rwsem_acquire(lockdep_map)
                                                rwsem_release(lockdep_map)
                                                wait_event(w, cond)
                                                unlock(A)
      lock(A)
      cond = true
      signal(w)
      unlock(A)
      rwsem_release(lockdep_map)
      
      In the second case, with the lockdep annotation, lockdep would warn about
      an ABBA deadlock, while the first case would just deadlock at some point.
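
      Expressed with the macros above (an illustrative sketch: "some_event",
      "cond", "wq" and lock "A" are placeholders, not btrfs code), the annotated
      case looks like:

        /* TA: modifies and later signals the condition. */
        btrfs_lockdep_acquire(fs_info, some_event);      /* reader side */
        cond = false;                                    /* condition violated */
        mutex_lock(&A);
        cond = true;                                     /* condition restored */
        wake_up(&wq);                                    /* signal(w) */
        mutex_unlock(&A);
        btrfs_lockdep_release(fs_info, some_event);

        /* TB: blocks on the wait event while holding lock A. */
        mutex_lock(&A);
        btrfs_might_wait_for_event(fs_info, some_event); /* writer side */
        wait_event(wq, cond);
        mutex_unlock(&A);

      Lockdep sees TA holding the map as a reader while acquiring A, and TB
      holding A while acquiring the map as a writer, which is the ABBA pattern
      it warns about.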
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Ioannis Angelakopoulos <iangelak@fb.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: dump extra info if one free space cache has more bitmaps than it should · 62cd9d44
      Qu Wenruo authored
      There is an internal report on hitting the following ASSERT() in
      recalculate_thresholds():
      
       	ASSERT(ctl->total_bitmaps <= max_bitmaps);
      
      The @max_bitmaps above is calculated using the following variables:
      
      - bytes_per_bg
        8 * 4096 * 4096 (128M) for x86_64/x86.
      
      - block_group->length
        The length of the block group.
      
      @max_bitmaps is the rounded up value of block_group->length / 128M.
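
      For reference, the check is roughly the following (a sketch using
      approximate names from free-space-cache.c, not the exact code):

        u64 size = block_group->length;
        u64 bytes_per_bg = BITS_PER_BITMAP * ctl->unit;  /* 32768 bits * 4K = 128M */
        u64 max_bitmaps = div64_u64(size + bytes_per_bg - 1, bytes_per_bg);

        max_bitmaps = max_t(u64, max_bitmaps, 1);

        ASSERT(ctl->total_bitmaps <= max_bitmaps);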
      
      Normally one free space cache should not have more bitmaps than the
      above value, but when it happens the ASSERT() can be triggered if
      CONFIG_BTRFS_ASSERT is also enabled.
      
      But the ASSERT() itself won't provide enough info to know what is going
      wrong.
      Is the block group too small, so that it only allows one bitmap?
      Or is there something else wrong?
      
      So although I haven't found extra reports or a crash dump to investigate
      further, add the extra info to make it more helpful to debug.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • Linux 6.0-rc7 · f76349cf
      Linus Torvalds authored
    • Merge tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4 · 5e049663
      Linus Torvalds authored
      Pull ext4 fixes from Ted Ts'o:
       "Regression and bug fixes:
      
         - Performance regression fix from 5.18 on a Raspberry Pi
      
         - Fix an extent parsing bug which triggers a BUG_ON when a (corrupted)
           extent tree has a non-root node with zero entries.
      
         - Fix a livelock where, in the right (wrong) circumstances, a large
           number of nfsd threads can try to write to a nearly full file
           system and retry for hours(!)"
      
      * tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
        ext4: limit the number of retries after discarding preallocations blocks
        ext4: fix bug in extents parsing when eh_entries == 0 and eh_depth > 0
        ext4: use buckets for cr 1 block scan instead of rbtree
        ext4: use locality group preallocation for small closed files
        ext4: make directory inode spreading reflect flexbg size
        ext4: avoid unnecessary spreading of allocations among groups
        ext4: make mballoc try target group first even with mb_optimize_scan
  2. 25 September 2022, 6 commits
  3. 24 September 2022, 17 commits
  4. 23 September 2022, 12 commits