1. 29 Mar 2023, 1 commit
  2. 26 Nov 2022, 1 commit
  3. 01 Nov 2022, 1 commit
  4. 06 Sep 2022, 1 commit
    • raid5: introduce MD_BROKEN · 964c44cc
      Committed by Mariusz Tkaczyk
      stable inclusion
      from stable-v5.10.120
      commit 0f03885059c1f2a5fb690d21578d0cad55a98b1f
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I5L6BR
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=0f03885059c1f2a5fb690d21578d0cad55a98b1f
      
      --------------------------------
      
      commit 57668f0a upstream.
      
      The raid456 module used to allow the array to reach the failed state.
      That was changed by fb73b357 ("raid5: block failing device if raid
      will be failed"), but the change introduced a bug: if raid5 fails
      during IO, it may leave a hung task that never completes. The Faulty
      flag on the device is necessary to process all requests and is
      checked in many places, mainly in analyze_stripe(). Allow Faulty to
      be set on the drive again, and set MD_BROKEN if the raid is failed.
      
      As a result, this level is allowed to reach the failed state again,
      but communication with userspace (via the -EBUSY status) is preserved.
      
      This restores the ability to fail the array via the mdadm
      --set-faulty command; it will be complemented by additional
      verification on the mdadm side.
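      
      For illustration, a minimal sketch of the shape of this change,
      simplified from the upstream raid5 error path (not the verbatim diff):
      
       static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
       {
               struct r5conf *conf = mddev->private;
               unsigned long flags;
      
               spin_lock_irqsave(&conf->device_lock, flags);
      
               if (test_bit(In_sync, &rdev->flags) &&
                   mddev->degraded == conf->max_degraded) {
                       /* Array has failed: record it via MD_BROKEN, but do
                        * not block the Faulty transition -- analyze_stripe()
                        * depends on Faulty to complete pending requests. */
                       set_bit(MD_BROKEN, &mddev->flags);
                       conf->recovery_disabled = mddev->recovery_disabled;
               }
      
               set_bit(Faulty, &rdev->flags);
               clear_bit(In_sync, &rdev->flags);
               mddev->degraded = raid5_calc_degraded(conf);
               spin_unlock_irqrestore(&conf->device_lock, flags);
       }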
      
      Reproduction steps:
       mdadm -CR imsm -e imsm -n 3 /dev/nvme[0-2]n1
       mdadm -CR r5 -e imsm -l5 -n3 /dev/nvme[0-2]n1 --assume-clean
       mkfs.xfs /dev/md126 -f
       mount /dev/md126 /mnt/root/
      
       fio --filename=/mnt/root/file --size=5GB --direct=1 --rw=randrw
      --bs=64k --ioengine=libaio --iodepth=64 --runtime=240 --numjobs=4
      --time_based --group_reporting --name=throughput-test-job
      --eta-newline=1 &
      
       echo 1 > /sys/block/nvme2n1/device/device/remove
       echo 1 > /sys/block/nvme1n1/device/device/remove
      
       [ 1475.787779] Call Trace:
       [ 1475.793111] __schedule+0x2a6/0x700
       [ 1475.799460] schedule+0x38/0xa0
       [ 1475.805454] raid5_get_active_stripe+0x469/0x5f0 [raid456]
       [ 1475.813856] ? finish_wait+0x80/0x80
       [ 1475.820332] raid5_make_request+0x180/0xb40 [raid456]
       [ 1475.828281] ? finish_wait+0x80/0x80
       [ 1475.834727] ? finish_wait+0x80/0x80
       [ 1475.841127] ? finish_wait+0x80/0x80
       [ 1475.847480] md_handle_request+0x119/0x190
       [ 1475.854390] md_make_request+0x8a/0x190
       [ 1475.861041] generic_make_request+0xcf/0x310
       [ 1475.868145] submit_bio+0x3c/0x160
       [ 1475.874355] iomap_dio_submit_bio.isra.20+0x51/0x60
       [ 1475.882070] iomap_dio_bio_actor+0x175/0x390
       [ 1475.889149] iomap_apply+0xff/0x310
       [ 1475.895447] ? iomap_dio_bio_actor+0x390/0x390
       [ 1475.902736] ? iomap_dio_bio_actor+0x390/0x390
       [ 1475.909974] iomap_dio_rw+0x2f2/0x490
       [ 1475.916415] ? iomap_dio_bio_actor+0x390/0x390
       [ 1475.923680] ? atime_needs_update+0x77/0xe0
       [ 1475.930674] ? xfs_file_dio_aio_read+0x6b/0xe0 [xfs]
       [ 1475.938455] xfs_file_dio_aio_read+0x6b/0xe0 [xfs]
       [ 1475.946084] xfs_file_read_iter+0xba/0xd0 [xfs]
       [ 1475.953403] aio_read+0xd5/0x180
       [ 1475.959395] ? _cond_resched+0x15/0x30
       [ 1475.965907] io_submit_one+0x20b/0x3c0
       [ 1475.972398] __x64_sys_io_submit+0xa2/0x180
       [ 1475.979335] ? do_io_getevents+0x7c/0xc0
       [ 1475.986009] do_syscall_64+0x5b/0x1a0
       [ 1475.992419] entry_SYSCALL_64_after_hwframe+0x65/0xca
       [ 1476.000255] RIP: 0033:0x7f11fc27978d
       [ 1476.006631] Code: Bad RIP value.
       [ 1476.073251] INFO: task fio:3877 blocked for more than 120 seconds.
      
      Cc: stable@vger.kernel.org
      Fixes: fb73b357 ("raid5: block failing device if raid will be failed")
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
      Signed-off-by: Song Liu <song@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
      Acked-by: Xie XiuQi <xiexiuqi@huawei.com>
  5. 02 Aug 2022, 3 commits
  6. 15 Nov 2021, 1 commit
  7. 09 Oct 2020, 1 commit
    • md/raid5: fix oops during stripe resizing · b44c018c
      Committed by Song Liu
      KoWei reported a crash during raid5 reshape:
      
      [ 1032.252932] Oops: 0002 [#1] SMP PTI
      [...]
      [ 1032.252943] RIP: 0010:memcpy_erms+0x6/0x10
      [...]
      [ 1032.252947] RSP: 0018:ffffba1ac0c03b78 EFLAGS: 00010286
      [ 1032.252949] RAX: 0000784ac0000000 RBX: ffff91bec3d09740 RCX: 0000000000001000
      [ 1032.252951] RDX: 0000000000001000 RSI: ffff91be6781c000 RDI: 0000784ac0000000
      [ 1032.252953] RBP: ffffba1ac0c03bd8 R08: 0000000000001000 R09: ffffba1ac0c03bf8
      [ 1032.252954] R10: 0000000000000000 R11: 0000000000000000 R12: ffffba1ac0c03bf8
      [ 1032.252955] R13: 0000000000001000 R14: 0000000000000000 R15: 0000000000000000
      [ 1032.252958] FS:  0000000000000000(0000) GS:ffff91becf500000(0000) knlGS:0000000000000000
      [ 1032.252959] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [ 1032.252961] CR2: 0000784ac0000000 CR3: 000000031780a002 CR4: 00000000001606e0
      [ 1032.252962] Call Trace:
      [ 1032.252969]  ? async_memcpy+0x179/0x1000 [async_memcpy]
      [ 1032.252977]  ? raid5_release_stripe+0x8e/0x110 [raid456]
      [ 1032.252982]  handle_stripe_expansion+0x15a/0x1f0 [raid456]
      [ 1032.252988]  handle_stripe+0x592/0x1270 [raid456]
      [ 1032.252993]  handle_active_stripes.isra.0+0x3cb/0x5a0 [raid456]
      [ 1032.252999]  raid5d+0x35c/0x550 [raid456]
      [ 1032.253002]  ? schedule+0x42/0xb0
      [ 1032.253006]  ? schedule_timeout+0x10e/0x160
      [ 1032.253011]  md_thread+0x97/0x160
      [ 1032.253015]  ? wait_woken+0x80/0x80
      [ 1032.253019]  kthread+0x104/0x140
      [ 1032.253022]  ? md_start_sync+0x60/0x60
      [ 1032.253024]  ? kthread_park+0x90/0x90
      [ 1032.253027]  ret_from_fork+0x35/0x40
      
      This is because cache_size_mutex was unlocked too early in
      resize_stripes(), which races with grow_one_stripe():
      grow_one_stripe() can then allocate a stripe with the wrong pool_size.
      
      Fix this issue by unlocking cache_size_mutex after updating pool_size.
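      
      For illustration, a sketch of the fixed ordering in resize_stripes()
      (heavily simplified; only the locking shape is shown):
      
       mutex_lock(&conf->cache_size_mutex);
       /* ... replace per-stripe pages for the new stripe geometry ... */
       conf->pool_size = newsize;              /* update while still locked */
       mutex_unlock(&conf->cache_size_mutex);  /* previously dropped too early */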
      
      Cc: <stable@vger.kernel.org> # v4.4+
      Reported-by: KoWei Sung <winders@amazon.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
  8. 25 Sep 2020, 11 commits
  9. 28 Aug 2020, 1 commit
    • md/raid5: make sure stripe_size as power of two · 6af10a33
      Committed by Yufen Yu
      Commit 3b5408b9 ("md/raid5: support config stripe_size by sysfs
      entry") made stripe_size a configurable value, but only required it
      to be a multiple of 4KB.
      
      In fact, stripe_size must be a power of two. Otherwise stripe_shift,
      which is the result of ilog2, cannot represent the real stripe_size,
      and stripe_hash() and stripe_hash_locks_hash() may return
      unexpected values.
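      
      For illustration, a sketch of the tightened validation in the
      stripe_size store path (simplified; the surrounding quiesce and
      stripe-cache rebuild logic is omitted, and is_power_of_2() comes
      from <linux/log2.h>):
      
       if (new % 4096 != 0 || new > PAGE_SIZE || !is_power_of_2(new))
               return -EINVAL;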
      
      Fixes: 3b5408b9 ("md/raid5: support config stripe_size by sysfs entry")
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
  10. 24 Aug 2020, 1 commit
  11. 03 Aug 2020, 4 commits
  12. 29 Jul 2020, 1 commit
  13. 23 Jul 2020, 1 commit
  14. 22 Jul 2020, 3 commits
    • md/raid5: support config stripe_size by sysfs entry · 3b5408b9
      Committed by Yufen Yu
      Add a new 'stripe_size' sysfs entry to set and show stripe_size.
      stripe_size must not be bigger than PAGE_SIZE and must be a multiple
      of 4096. stripe_size can be adjusted by writing a value into the
      sysfs entry, e.g. to set stripe_size to 16KB:
      
                echo 16384 > /sys/block/md1/md/stripe_size
      
      Show the current stripe_size value:
      
                cat /sys/block/md1/md/stripe_size
      
      When PAGE_SIZE equals 4096, 'stripe_size' can only be read.
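      
      For illustration, a sketch of the sysfs plumbing following md's
      md_sysfs_entry pattern (simplified; locking, quiescing, and the
      stripe-cache rebuild are omitted, so this is not the complete
      store path):
      
       static ssize_t
       raid5_show_stripe_size(struct mddev *mddev, char *page)
       {
               struct r5conf *conf = mddev->private;
      
               return sprintf(page, "%lu\n", RAID5_STRIPE_SIZE(conf));
       }
      
       static ssize_t
       raid5_store_stripe_size(struct mddev *mddev, const char *page, size_t len)
       {
               unsigned long new;
      
               if (kstrtoul(page, 10, &new))
                       return -EINVAL;
               /* must be a 4KB multiple and no bigger than PAGE_SIZE */
               if (new % 4096 != 0 || new > PAGE_SIZE)
                       return -EINVAL;
               /* ... quiesce the array and rebuild the stripe cache ... */
               return len;
       }
      
       static struct md_sysfs_entry
       raid5_stripe_size = __ATTR(stripe_size, 0644,
                                  raid5_show_stripe_size,
                                  raid5_store_stripe_size);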
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
    • md/raid5: set default stripe_size as 4096 · e2368582
      Committed by Yufen Yu
      In RAID5, if an issued bio is bigger than stripe_size, it is split
      into stripe_size units and processed one by one. Even for sizes
      smaller than stripe_size, RAID5 still requests at least stripe_size
      of data from the disk.
      
      Currently, stripe_size is equal to PAGE_SIZE. Since filesystems
      usually issue bios in 4KB units, there is no problem when PAGE_SIZE
      is 4KB. But with a 64KB PAGE_SIZE, a bio from the filesystem requests
      4KB of data while RAID5 issues at least stripe_size (64KB) of IO each
      time. That wastes disk bandwidth and xor computation.
      
      To avoid this waste, we want to make stripe_size configurable. This
      patch just sets the default stripe_size to 4096. Users can also set
      a value bigger than 4KB for special requirements, such as when the
      issued IO size is known to be more than 4KB.
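      
      For illustration, a sketch of how the default could be wired up in
      the conf setup path (field and macro names follow the upstream
      series, but this is a simplified guess at the shape, not the diff):
      
       #define DEFAULT_STRIPE_SIZE     4096
      
       conf->stripe_size = DEFAULT_STRIPE_SIZE;
       conf->stripe_shift = ilog2(DEFAULT_STRIPE_SIZE) - 9;
       conf->stripe_sectors = DEFAULT_STRIPE_SIZE >> 9;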
      
      To evaluate the new feature, we create a raid5 device '/dev/md5'
      with 4 SSD disks and test it on an arm64 machine with 64KB PAGE_SIZE.
      
      1) We format /dev/md5 with mkfs.ext4 and mount it with the default
       configuration on the /mnt directory. Then we test it with dbench,
       using the command: dbench -D /mnt -t 1000 10. The results:
      
       'stripe_size = 64KB'
      
        Operation      Count    AvgLat    MaxLat
        ----------------------------------------
        NTCreateX    9805011     0.021    64.728
        Close        7202525     0.001     0.120
        Rename        415213     0.051    44.681
        Unlink       1980066     0.079    93.147
        Deltree          240     1.793     6.516
        Mkdir            120     0.004     0.007
        Qpathinfo    8887512     0.007    37.114
        Qfileinfo    1557262     0.001     0.030
        Qfsinfo      1629582     0.012     0.152
        Sfileinfo     798756     0.040    57.641
        Find         3436004     0.019    57.782
        WriteX       4887239     0.021    57.638
        ReadX        15370483     0.005    37.818
        LockX          31934     0.003     0.022
        UnlockX        31933     0.001     0.021
        Flush         687205    13.302   530.088
      
       Throughput 307.799 MB/sec  10 clients  10 procs  max_latency=530.091 ms
       -------------------------------------------------------
      
       'stripe_size = 4KB'
      
        Operation      Count    AvgLat    MaxLat
        ----------------------------------------
        NTCreateX    11999166     0.021    36.380
        Close        8814128     0.001     0.122
        Rename        508113     0.051    29.169
        Unlink       2423242     0.070    38.141
        Deltree          300     1.885     7.155
        Mkdir            150     0.004     0.006
        Qpathinfo    10875921     0.007    35.485
        Qfileinfo    1905837     0.001     0.032
        Qfsinfo      1994304     0.012     0.125
        Sfileinfo     977450     0.029    26.489
        Find         4204952     0.019     9.361
        WriteX       5981890     0.019    27.804
        ReadX        18809742     0.004    33.491
        LockX          39074     0.003     0.025
        UnlockX        39074     0.001     0.014
        Flush         841022    10.712   458.848
      
       Throughput 376.777 MB/sec  10 clients  10 procs  max_latency=458.852 ms
       -------------------------------------------------------
      
       This shows that a 4KB stripe_size gives higher throughput
       (376.777 vs 307.799 MB/sec) and lower latency than a 64KB stripe_size.
      
       2) We evaluate the IO throughput of /dev/md5 with fio, using this config:
      
       [4KB randwrite]
       direct=1
       numjob=2
       iodepth=64
       ioengine=libaio
       filename=/dev/md5
       bs=4KB
       rw=randwrite
      
       [1MB write]
       direct=1
       numjob=2
       iodepth=64
       ioengine=libaio
       filename=/dev/md5
       bs=1MB
       rw=write
      
       The results:
      
                       | stripe_size (64KB) | stripe_size (4KB)
        ---------------+--------------------+------------------
        4KB randwrite  |       15MB/s       |      100MB/s
        1MB write      |     1000MB/s       |      700MB/s
      
       The results show that when the issued IO is bigger than 4KB (the
       1MB writes), a 64KB stripe_size gives much higher throughput. But
       for 4KB randwrite, i.e. when the IOs issued to the device are
       small, a 4KB stripe_size performs better.
      
      Normally, the default value (4096) gives relatively good performance.
      But if every issued IO is bigger than 4096, setting a larger value
      may give better performance.
      
      Here we just set the default stripe_size to 4096; support for
      setting a different stripe_size through a sysfs interface follows
      in the next patch.
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
    • md/raid456: convert macro STRIPE_* to RAID5_STRIPE_* · c911c46c
      Committed by Yufen Yu
      Convert the macros STRIPE_SIZE, STRIPE_SECTORS and STRIPE_SHIFT to
      RAID5_STRIPE_SIZE(), RAID5_STRIPE_SECTORS() and RAID5_STRIPE_SHIFT().
      
      This patch prepares for the adjustable stripe_size that follows.
      It does not change any existing functionality.
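      
      For illustration, a sketch of what the converted accessors might
      look like at this stage (an assumption about the shape, simplified):
      they take the r5conf so a later patch can return a per-array value,
      but here they still expand to the old compile-time constants.
      
       #define RAID5_STRIPE_SIZE(conf)         STRIPE_SIZE
       #define RAID5_STRIPE_SHIFT(conf)        STRIPE_SHIFT
       #define RAID5_STRIPE_SECTORS(conf)      STRIPE_SECTORS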
      Signed-off-by: Yufen Yu <yuyufen@huawei.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
  15. 17 Jul 2020, 4 commits
  16. 16 Jul 2020, 1 commit
  17. 15 Jul 2020, 1 commit
    • md: fix deadlock causing by sysfs_notify · e1a86dbb
      Committed by Junxiao Bi
      The following deadlock was captured. The first process holds
      'kernfs_mutex' and is hung on IO. The IO was staged in the
      'r1conf.pending_bio_list' of the raid1 device; that pending bio list
      would be flushed by the second process, 'md127_raid1', but it is
      itself hung on 'kernfs_mutex'. Using sysfs_notify_dirent_safe() to
      replace sysfs_notify() fixes this. There were other sysfs_notify()
      calls invoked from the IO path; all of them are removed.
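      
      For illustration, a sketch of the replacement pattern (simplified;
      'sync_completed' is just an example attribute here): the kernfs node
      is looked up once outside the IO path, and notifications then go
      through the cached node, so no kernfs lookup (and no kernfs_mutex)
      is needed at notification time. Both helpers already exist in
      drivers/md/md.h.
      
       struct kernfs_node *sync_completed_kn;
      
       /* at setup time, where taking kernfs locks is safe: */
       sync_completed_kn = sysfs_get_dirent_safe(mddev->kobj.sd,
                                                 "sync_completed");
      
       /* in the IO path, instead of sysfs_notify(&mddev->kobj, NULL,
        * "sync_completed"), notify through the cached node: */
       sysfs_notify_dirent_safe(sync_completed_kn);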
      
       PID: 40430  TASK: ffff8ee9c8c65c40  CPU: 29  COMMAND: "probe_file"
        #0 [ffffb87c4df37260] __schedule at ffffffff9a8678ec
        #1 [ffffb87c4df372f8] schedule at ffffffff9a867f06
        #2 [ffffb87c4df37310] io_schedule at ffffffff9a0c73e6
        #3 [ffffb87c4df37328] __dta___xfs_iunpin_wait_3443 at ffffffffc03a4057 [xfs]
        #4 [ffffb87c4df373a0] xfs_iunpin_wait at ffffffffc03a6c79 [xfs]
        #5 [ffffb87c4df373b0] __dta_xfs_reclaim_inode_3357 at ffffffffc039a46c [xfs]
        #6 [ffffb87c4df37400] xfs_reclaim_inodes_ag at ffffffffc039a8b6 [xfs]
        #7 [ffffb87c4df37590] xfs_reclaim_inodes_nr at ffffffffc039bb33 [xfs]
        #8 [ffffb87c4df375b0] xfs_fs_free_cached_objects at ffffffffc03af0e9 [xfs]
        #9 [ffffb87c4df375c0] super_cache_scan at ffffffff9a287ec7
       #10 [ffffb87c4df37618] shrink_slab at ffffffff9a1efd93
       #11 [ffffb87c4df37700] shrink_node at ffffffff9a1f5968
       #12 [ffffb87c4df37788] do_try_to_free_pages at ffffffff9a1f5ea2
       #13 [ffffb87c4df377f0] try_to_free_mem_cgroup_pages at ffffffff9a1f6445
       #14 [ffffb87c4df37880] try_charge at ffffffff9a26cc5f
       #15 [ffffb87c4df37920] memcg_kmem_charge_memcg at ffffffff9a270f6a
       #16 [ffffb87c4df37958] new_slab at ffffffff9a251430
       #17 [ffffb87c4df379c0] ___slab_alloc at ffffffff9a251c85
       #18 [ffffb87c4df37a80] __slab_alloc at ffffffff9a25635d
       #19 [ffffb87c4df37ac0] kmem_cache_alloc at ffffffff9a251f89
       #20 [ffffb87c4df37b00] alloc_inode at ffffffff9a2a2b10
       #21 [ffffb87c4df37b20] iget_locked at ffffffff9a2a4854
       #22 [ffffb87c4df37b60] kernfs_get_inode at ffffffff9a311377
       #23 [ffffb87c4df37b80] kernfs_iop_lookup at ffffffff9a311e2b
       #24 [ffffb87c4df37ba8] lookup_slow at ffffffff9a290118
       #25 [ffffb87c4df37c10] walk_component at ffffffff9a291e83
       #26 [ffffb87c4df37c78] path_lookupat at ffffffff9a293619
       #27 [ffffb87c4df37cd8] filename_lookup at ffffffff9a2953af
       #28 [ffffb87c4df37de8] user_path_at_empty at ffffffff9a295566
       #29 [ffffb87c4df37e10] vfs_statx at ffffffff9a289787
       #30 [ffffb87c4df37e70] SYSC_newlstat at ffffffff9a289d5d
       #31 [ffffb87c4df37f18] sys_newlstat at ffffffff9a28a60e
       #32 [ffffb87c4df37f28] do_syscall_64 at ffffffff9a003949
       #33 [ffffb87c4df37f50] entry_SYSCALL_64_after_hwframe at ffffffff9aa001ad
           RIP: 00007f617a5f2905  RSP: 00007f607334f838  RFLAGS: 00000246
           RAX: ffffffffffffffda  RBX: 00007f6064044b20  RCX: 00007f617a5f2905
           RDX: 00007f6064044b20  RSI: 00007f6064044b20  RDI: 00007f6064005890
           RBP: 00007f6064044aa0   R8: 0000000000000030   R9: 000000000000011c
           R10: 0000000000000013  R11: 0000000000000246  R12: 00007f606417e6d0
           R13: 00007f6064044aa0  R14: 00007f6064044b10  R15: 00000000ffffffff
           ORIG_RAX: 0000000000000006  CS: 0033  SS: 002b
      
       PID: 927    TASK: ffff8f15ac5dbd80  CPU: 42  COMMAND: "md127_raid1"
        #0 [ffffb87c4df07b28] __schedule at ffffffff9a8678ec
        #1 [ffffb87c4df07bc0] schedule at ffffffff9a867f06
        #2 [ffffb87c4df07bd8] schedule_preempt_disabled at ffffffff9a86825e
        #3 [ffffb87c4df07be8] __mutex_lock at ffffffff9a869bcc
        #4 [ffffb87c4df07ca0] __mutex_lock_slowpath at ffffffff9a86a013
        #5 [ffffb87c4df07cb0] mutex_lock at ffffffff9a86a04f
        #6 [ffffb87c4df07cc8] kernfs_find_and_get_ns at ffffffff9a311d83
        #7 [ffffb87c4df07cf0] sysfs_notify at ffffffff9a314b3a
        #8 [ffffb87c4df07d18] md_update_sb at ffffffff9a688696
        #9 [ffffb87c4df07d98] md_update_sb at ffffffff9a6886d5
       #10 [ffffb87c4df07da8] md_check_recovery at ffffffff9a68ad9c
       #11 [ffffb87c4df07dd0] raid1d at ffffffffc01f0375 [raid1]
       #12 [ffffb87c4df07ea0] md_thread at ffffffff9a680348
       #13 [ffffb87c4df07f08] kthread at ffffffff9a0b8005
       #14 [ffffb87c4df07f50] ret_from_fork at ffffffff9aa00344
      Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
  18. 09 Jul 2020, 1 commit
  19. 01 Jul 2020, 1 commit
  20. 14 May 2020, 1 commit