1. 07 Jun, 2023 (4 commits)
  2. 06 Jun, 2023 (9 commits)
  3. 05 Jun, 2023 (11 commits)
  4. 03 Jun, 2023 (16 commits)
    • !903 backport block bugfix · 4ac8d141
      openeuler-ci-bot committed
      Merge Pull Request from: @zhangjialin11 

      This patch series fixes block layer bugs: three patches fix iocost
      bugs, and the other patches fix raid10 and badblocks bugs.
       
       
      Link: https://gitee.com/openeuler/kernel/pulls/903 
      
      Reviewed-by: Zheng Zengkai <zhengzengkai@huawei.com> 
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com> 
      4ac8d141
    • md/raid10: fix incorrect done of recovery · b0ac58c9
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188535, https://gitee.com/openeuler/kernel/issues/I6O61Q
      CVE: NA
      
      --------------------------------
      
      Recovery will give up and let chunks_skipped++ in raid10_sync_request()
      if there are some bad_blocks, and it will return max_sector when
      chunks_skipped >= geo.raid_disks. At that point recovery has failed and
      the data is inconsistent, but the user thinks recovery is done, which
      is wrong.

      Fix it by setting the mirror's recovery_disabled, so that a spare
      device will not be added here, as sketched below.
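
      A minimal sketch of the idea, using the recovery_disabled fields that
      raid10_add_disk() already compares:

        /* recovery gave up on this mirror: record the current
         * generation so raid10_add_disk() refuses to attach a spare
         * to this slot */
        conf->mirrors[i].recovery_disabled = mddev->recovery_disabled;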
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      b0ac58c9
    • md/raid10: fix null-ptr-deref in raid10_sync_request · 2de30b8f
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188378, https://gitee.com/openeuler/kernel/issues/I6GGV7
      CVE: NA
      
      --------------------------------
      
      init_resync() inits the mempool and sets conf->have_replacement at the
      beginning of sync; close_sync() frees the mempool when sync is
      completed.

      After commit 7e83ccbe ("md/raid10: Allow skipping recovery when clean
      arrays are assembled"), recovery might be skipped, in which case
      init_resync() is called but close_sync() is not. A null-ptr-deref
      occurs as below:
        1) create an array and wait for resync to complete;
           mddev->recovery_cp is set to MaxSector.
        2) recovery is woken and is skipped. conf->have_replacement is set
           to 0 in init_resync(). close_sync() is not called.
        3) some io errors occur and rdev A is set to WantReplacement.
        4) a new device is added and set as A's replacement.
        5) recovery is woken; A has a replacement, but conf->have_replacement
           is 0. r10bio->dev[i].repl_bio will not be allocated and a
           null-ptr-deref occurs.

      Fix it by not calling init_resync() if recovery is skipped.
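
      A sketch of the reordering in raid10_sync_request(), with
      recovery_will_be_skipped() as a hypothetical stand-in for the
      existing skip checks:

        /* before: mempool is inited even when the sync is then skipped */
        if (init_resync(conf))
                return 0;
        if (recovery_will_be_skipped())   /* hypothetical condition */
                return max_sector;        /* close_sync() never runs */

        /* after: init only once we know the sync will actually run */
        if (recovery_will_be_skipped())
                return max_sector;
        if (init_resync(conf))
                return 0;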
      
      Fixes: 7e83ccbe ("md/raid10: Allow skipping recovery when clean arrays are assembled")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      2de30b8f
    • block/badblocks: fix badblocks loss when badblocks combine · e35a7762
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188569, https://gitee.com/openeuler/kernel/issues/I6ZG5B
      CVE: NA
      
      --------------------------------
      
      Badblocks will be lost if we set them as below:
      
        # echo 1 1 > bad_blocks
        # echo 3 1 > bad_blocks
        # echo 1 5 > bad_blocks
        # cat bad_blocks
          1 3
      
      We combine badblocks if there is an intersection between p[lo] and
      p[hi] in badblocks_set(). The end of the combined range is currently
      taken from p[hi], but p[lo] may extend beyond p[hi], so the new end
      should be the larger of p[lo]'s end and p[hi]'s end:
        lo: |------------------------|
        hi:		|--------|
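
      A minimal sketch of the corrected merge step, using the
      BB_OFFSET/BB_LEN helpers from include/linux/badblocks.h:

        /* the combined range must end at the larger of the two ends */
        sector_t lo_end = BB_OFFSET(p[lo]) + BB_LEN(p[lo]);
        sector_t hi_end = BB_OFFSET(p[hi]) + BB_LEN(p[hi]);
        sector_t new_end = max(lo_end, hi_end);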
      
      Fixes: 9e0e252a ("badblocks: Add core badblock management code")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      e35a7762
    • block/badblocks: fix the bug of reverse order · f9a3eea0
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188569, https://gitee.com/openeuler/kernel/issues/I6ZG5B
      CVE: NA
      
      --------------------------------
      
      The order of badblocks will be reversed if we set a large area at once.
      Keeping 'hi' unchanged while adding consecutive badblocks is wrong: the
      next range to set is greater than 'hi' and should be added at the next
      position. Increment 'hi' by 1 in each cycle.
      
        # echo 0 2048 > bad_blocks
        # cat bad_blocks
          1536 512
          1024 512
          512 512
          0 512
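
      A sketch of the insert loop in badblocks_set(), with the surrounding
      memmove bookkeeping elided:

        while (sectors) {
                sector_t this_len = sectors > BB_MAX_LEN ? BB_MAX_LEN
                                                         : sectors;
                p[hi] = BB_MAKE(s, this_len, acknowledged);
                s += this_len;
                sectors -= this_len;
                hi++;   /* fix: advance the insertion point each cycle */
        }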
      
      Fixes: 9e0e252a ("badblocks: Add core badblock management code")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      f9a3eea0
    • md: fix unexpected changes of return value in rdev_set_badblocks · bebf3d97
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188569, https://gitee.com/openeuler/kernel/issues/I6XBZQ
      CVE: NA
      
      --------------------------------
      
      If setting any badblocks fails, we remove the rdev (set it to Faulty
      or set recovery_disabled). The previous patch "md/raid10: fix io hung
      in md_wait_for_blocked_rdev()" checks badblocks->changed instead of
      the return value in rdev_set_badblocks(), but the return value of this
      function also changed accordingly, which is not what we expected.

      Keep the return value consistent with before.
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      bebf3d97
    • md/raid10: fix io hung in md_wait_for_blocked_rdev() · c23e1cd1
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188569, https://gitee.com/openeuler/kernel/issues/I6XBZQ
      CVE: NA
      
      --------------------------------
      
      If badblocks are merged but bb->count is exceeded, badblocks_set()
      will return 1 and the merged badblocks will become unacknowledged.
      rdev_set_badblocks() will then neither set sb_flags nor wake up
      mddev->thread, and io waiting in md_wait_for_blocked_rdev() will hang
      because BlockedBadBlocks may never be cleared.

      Fix it by checking badblocks->changed instead of the return value;
      this flag is set whenever badblocks change.
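
      A sketch of rdev_set_badblocks() after this patch and its return-value
      follow-up, assuming the usual md superblock flags (exact bits may
      differ in this branch):

        badblocks_set(&rdev->badblocks, s + rdev->data_offset, sectors, 0);
        if (rdev->badblocks.changed) {
                /* badblocks really changed: persist and wake the thread */
                sysfs_notify_dirent_safe(rdev->sysfs_state);
                set_mask_bits(&mddev->sb_flags, 0,
                              BIT(MD_SB_CHANGE_CLEAN) |
                              BIT(MD_SB_CHANGE_PENDING));
                md_wakeup_thread(rdev->mddev->thread);
                return 1;
        }
        return 0;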
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      c23e1cd1
    • block: Only set bb->changed when badblocks changes · 78cba163
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188569, https://gitee.com/openeuler/kernel/issues/I6XBZQ
      CVE: NA
      
      --------------------------------
      
      bb->changed and unacked_exist are set, and badblocks_update_acked() is
      invoked, even if no badblocks change in badblocks_set(). Only update
      them when badblocks actually change.
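
      A sketch of the guarded update, assuming a local 'changed' flag that
      records whether any range was actually added or merged:

        if (changed) {
                bb->changed = 1;
                if (!acknowledged)
                        bb->unacked_exist = 1;
                else
                        badblocks_update_acked(bb);
        }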
      
      Fixes: 9e0e252a ("badblocks: Add core badblock management code")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      78cba163
    • md/raid10: fix incorrect counting of rdev->nr_pending · 7b3b8187
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188605, https://gitee.com/openeuler/kernel/issues/I6ZJ3T
      CVE: NA
      
      --------------------------------
      
      We read the rdev from mirrors.replacement twice in
      raid10_write_request(). If the replacement changes between the two
      reads, we increment A->nr_pending but decrement B->nr_pending.
      
        T1 (write)	   T2 (remove)	    T3 (add)
                         raid10_remove_disk
      
        raid10_write_request
         rrdev = conf->mirrors[d].replacement; ->rdev A
         A nr_pending++
      
                          p->rdev = p->replacement; ->rdev A
                          p->replacement = NULL;
      
      				    //A is set to WantReplacement
                                          raid10_add_disk
      				     p->replacement = rdev; ->rdev B
      
         if blocked_rdev
          rdev = conf->mirrors[d].replacement; ->rdev B
          B nr_pending--
      
      Fix it by recording the rdev in the r10bio and getting the rdev from
      the r10bio, as sketched below.
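
      A sketch of the idea; 'repl_rdev' is a hypothetical field standing in
      for wherever the r10bio records the pointer:

        /* submit side: resolve the replacement once, remember it */
        struct md_rdev *rrdev = conf->mirrors[d].replacement;

        if (rrdev)
                atomic_inc(&rrdev->nr_pending);
        r10_bio->devs[d].repl_rdev = rrdev;     /* hypothetical field */

        /* blocked/error path: undo against the recorded pointer,
         * never against a fresh read of conf->mirrors[d].replacement */
        rrdev = r10_bio->devs[d].repl_rdev;
        if (rrdev)
                atomic_dec(&rrdev->nr_pending);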
      
      Fixes: 475b0321 ("md/raid10: writes should get directed to replacement as well as original.")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      7b3b8187
    • md/raid10: remove WARN_ON_ONCE in raid10_end_write_request · a3ebeed7
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188605, https://gitee.com/openeuler/kernel/issues/I6GOYF
      CVE: NA
      
      --------------------------------
      
      mirror->rdev may be read first and mirror->replacement later because
      of memory reordering in raid10_end_write_request(), and the WARN_ON
      fires if we remove a disk at the same time.
      
        T1 remove			T2 io end
        raid10_remove_disk		raid10_end_write_request
         p->rdev = NULL
      				 read rdev -> NULL
         smp_mb
         p->replacement = NULL
      				 read replacement -> NULL
      
      It is meaningless to compare rdev with mirror->rdev after we get it
      from the r10_bio in raid10_end_write_request(). Remove this
      WARN_ON_ONCE.
      
      Fixes: 2ecf5e6ecbfd ("md/raid10: fix uaf if replacement replaces rdev")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      a3ebeed7
    • md/raid10: fix uaf if replacement replaces rdev · af959500
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188377, https://gitee.com/openeuler/kernel/issues/I6GOYF
      CVE: NA
      
      --------------------------------
      
      After commit 4ca40c2c ("md/raid10: Allow replacement device to be
      replace old drive."), mirrors->replacement can replace rdev while the
      replacement's io is pending, and repl_bio will write to rdev (see
      raid10_write_one_disk()). We then get the wrong device via r10conf in
      raid10_end_write_request(). In that case, r10_bio->devs[slot].repl_bio
      is put without being set to IO_MADE_GOOD, and it is put again later in
      raid_end_bio_io(), so a use-after-free occurs.

      Fix it by using r10_bio to record the rdev. Put the io-failure and
      no-replacement handling together, so there is no need to change repl.
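
      A minimal sketch of the completion side once the rdev is recorded in
      the r10bio (the per-device 'rdev' field is hypothetical here):

        /* raid10_end_write_request(): use the device recorded at submit
         * time instead of re-reading conf->mirrors[devnum], so a
         * concurrent replacement swap cannot hand back the wrong rdev */
        struct md_rdev *rdev = r10_bio->devs[slot].rdev;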
      
        ==================================================================
        BUG: KASAN: use-after-free in bio_flagged include/linux/bio.h:238 [inline]
        BUG: KASAN: use-after-free in bio_put+0x78/0x80 block/bio.c:650
        Read of size 2 at addr ffff888116524dd4 by task md0_raid10/2618
      
        CPU: 0 PID: 2618 Comm: md0_raid10 Not tainted 5.10.0+ #3
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
        sd 0:0:0:0: rejecting I/O to offline device
        Call Trace:
         __dump_stack lib/dump_stack.c:77 [inline]
         dump_stack+0x107/0x167 lib/dump_stack.c:118
         print_address_description.constprop.0+0x1c/0x270 mm/kasan/report.c:390
         __kasan_report mm/kasan/report.c:550 [inline]
         kasan_report.cold+0x22/0x3a mm/kasan/report.c:567
         bio_flagged include/linux/bio.h:238 [inline]
         bio_put+0x78/0x80 block/bio.c:650
         put_all_bios drivers/md/raid10.c:248 [inline]
         free_r10bio drivers/md/raid10.c:257 [inline]
         raid_end_bio_io+0x3b5/0x590 drivers/md/raid10.c:309
         handle_write_completed drivers/md/raid10.c:2699 [inline]
         raid10d+0x2f85/0x5af0 drivers/md/raid10.c:2759
         md_thread+0x444/0x4b0 drivers/md/md.c:7932
         kthread+0x38c/0x470 kernel/kthread.c:313
         ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:299
      
        Allocated by task 1400:
         kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
         kasan_set_track mm/kasan/common.c:56 [inline]
         set_alloc_info mm/kasan/common.c:498 [inline]
         __kasan_kmalloc.constprop.0+0xb5/0xe0 mm/kasan/common.c:530
         slab_post_alloc_hook mm/slab.h:512 [inline]
         slab_alloc_node mm/slub.c:2923 [inline]
         slab_alloc mm/slub.c:2931 [inline]
         kmem_cache_alloc+0x144/0x360 mm/slub.c:2936
         mempool_alloc+0x146/0x360 mm/mempool.c:391
         bio_alloc_bioset+0x375/0x610 block/bio.c:486
         bio_clone_fast+0x20/0x50 block/bio.c:711
         raid10_write_one_disk+0x166/0xd30 drivers/md/raid10.c:1240
         raid10_write_request+0x1600/0x2c90 drivers/md/raid10.c:1484
         __make_request drivers/md/raid10.c:1508 [inline]
         raid10_make_request+0x376/0x620 drivers/md/raid10.c:1537
         md_handle_request+0x699/0x970 drivers/md/md.c:451
         md_submit_bio+0x204/0x400 drivers/md/md.c:489
         __submit_bio block/blk-core.c:959 [inline]
         __submit_bio_noacct block/blk-core.c:1007 [inline]
         submit_bio_noacct+0x2e3/0xcf0 block/blk-core.c:1086
         submit_bio+0x1a0/0x3a0 block/blk-core.c:1146
         submit_bh_wbc+0x685/0x8e0 fs/buffer.c:3053
         ext4_commit_super+0x37e/0x6c0 fs/ext4/super.c:5696
         flush_stashed_error_work+0x28b/0x400 fs/ext4/super.c:791
         process_one_work+0x9a6/0x1590 kernel/workqueue.c:2280
         worker_thread+0x61d/0x1310 kernel/workqueue.c:2426
         kthread+0x38c/0x470 kernel/kthread.c:313
         ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:299
      
        Freed by task 2618:
         kasan_save_stack+0x1b/0x40 mm/kasan/common.c:48
         kasan_set_track+0x1c/0x30 mm/kasan/common.c:56
         kasan_set_free_info+0x20/0x40 mm/kasan/generic.c:361
         __kasan_slab_free+0x151/0x180 mm/kasan/common.c:482
         slab_free_hook mm/slub.c:1569 [inline]
         slab_free_freelist_hook+0xa9/0x180 mm/slub.c:1608
         slab_free mm/slub.c:3179 [inline]
         kmem_cache_free+0xcd/0x3d0 mm/slub.c:3196
         mempool_free+0xe3/0x3b0 mm/mempool.c:500
         bio_free+0xe2/0x140 block/bio.c:266
         bio_put+0x58/0x80 block/bio.c:651
         raid10_end_write_request+0x885/0xb60 drivers/md/raid10.c:516
         bio_endio+0x376/0x6a0 block/bio.c:1465
         req_bio_endio block/blk-core.c:289 [inline]
         blk_update_request+0x5f5/0xf40 block/blk-core.c:1525
         blk_mq_end_request+0x4c/0x510 block/blk-mq.c:654
         blk_flush_complete_seq+0x835/0xd80 block/blk-flush.c:204
         flush_end_io+0x7b7/0xb90 block/blk-flush.c:261
         __blk_mq_end_request+0x282/0x4c0 block/blk-mq.c:645
         scsi_end_request+0x3a8/0x850 drivers/scsi/scsi_lib.c:607
         scsi_io_completion+0x3f5/0x1320 drivers/scsi/scsi_lib.c:970
         scsi_softirq_done+0x11b/0x490 drivers/scsi/scsi_lib.c:1448
         blk_mq_complete_request block/blk-mq.c:788 [inline]
         blk_mq_complete_request+0x84/0xb0 block/blk-mq.c:785
         scsi_mq_done+0x155/0x360 drivers/scsi/scsi_lib.c:1603
         virtscsi_vq_done drivers/scsi/virtio_scsi.c:184 [inline]
         virtscsi_req_done+0x14c/0x220 drivers/scsi/virtio_scsi.c:199
         vring_interrupt drivers/virtio/virtio_ring.c:2061 [inline]
         vring_interrupt+0x27a/0x300 drivers/virtio/virtio_ring.c:2047
         __handle_irq_event_percpu+0x2f8/0x830 kernel/irq/handle.c:156
         handle_irq_event_percpu kernel/irq/handle.c:196 [inline]
         handle_irq_event+0x105/0x280 kernel/irq/handle.c:213
         handle_edge_irq+0x258/0xd20 kernel/irq/chip.c:828
         asm_call_irq_on_stack+0xf/0x20
         __run_irq_on_irqstack arch/x86/include/asm/irq_stack.h:48 [inline]
         run_irq_on_irqstack_cond arch/x86/include/asm/irq_stack.h:101 [inline]
         handle_irq arch/x86/kernel/irq.c:230 [inline]
         __common_interrupt arch/x86/kernel/irq.c:249 [inline]
         common_interrupt+0xe2/0x190 arch/x86/kernel/irq.c:239
         asm_common_interrupt+0x1e/0x40 arch/x86/include/asm/idtentry.h:626
      
      Fixes: 4ca40c2c ("md/raid10: Allow replacement device to be replace old drive.")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      af959500
    • md/raid10: fix null-ptr-deref of mreplace in raid10_sync_request · 7718714e
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188527, https://gitee.com/openeuler/kernel/issues/I6O3HO
      CVE: NA
      
      --------------------------------
      
      need_replace will be set to 1 if a non-Faulty mreplace exists, and
      mreplace will be dereferenced later. However, a later check of
      mreplace might set it to NULL; a null-ptr-deref occurs if need_replace
      is still 1 at that point.

      Fix it by merging the two checks into one.
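
      A minimal sketch of the merged check, so need_replace can never
      disagree with the final value of mreplace:

        if (mreplace && test_bit(Faulty, &mreplace->flags))
                mreplace = NULL;
        /* derive need_replace from the same, final pointer value */
        need_replace = mreplace != NULL;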
      
      Fixes: ee37d731 ("md/raid10: Fix raid10 replace hang when new added disk faulty")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      7718714e
    • md/raid10: fix io loss while replacement replace rdev · e8025850
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188787, https://gitee.com/openeuler/kernel/issues/I78YIW
      CVE: NA
      
      --------------------------------
      
      When we remove a disk that has a replacement, we first set rdev to
      NULL, then assign the replacement to rdev, and finally set replacement
      to NULL (see raid10_remove_disk()). If io is submitted at the same
      time, it might read both rdev and replacement as NULL, and the io will
      not be submitted:

        rdev -> NULL
                              read rdev
        replacement -> NULL
                              read replacement

      Fix it by reading replacement first and rdev later, using smp_mb() to
      prevent memory reordering.
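
      A sketch of the reader side, pairing with the barrier in
      raid10_remove_disk() (variable names are illustrative):

        struct md_rdev *rrdev, *rdev;

        rrdev = conf->mirrors[d].replacement;
        smp_mb();   /* pairs with the barrier in raid10_remove_disk() */
        rdev = conf->mirrors[d].rdev;
        /* if rrdev reads as NULL because removal already promoted it,
         * the rdev read after the barrier is guaranteed to see the
         * promoted device, so rdev and rrdev cannot both be NULL */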
      
      Fixes: 475b0321 ("md/raid10: writes should get directed to replacement as well as original.")
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      e8025850
    • md/raid10: prioritize adding disk to 'removed' mirror · 2e2e7ab6
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188804, https://gitee.com/openeuler/kernel/issues/I78YIS
      CVE: NA
      
      --------------------------------
      
      When a new disk is added to raid10, conf->mirrors is traversed from
      the start to find a mirror matching one of the following:
        1. mirror->rdev is set to WantReplacement and has no replacement;
           the new disk becomes mirror->replacement.
        2. there is no rdev; the new disk becomes mirror->rdev.
      
      Consider the array below (sda is set to WantReplacement):
      
          Number   Major   Minor   RaidDevice State
             0       8        0        0      active sync set-A   /dev/sda
             -       0        0        1      removed
             2       8       32        2      active sync set-A   /dev/sdc
             3       8       48        3      active sync set-B   /dev/sdd
      
      Using 'mdadm --add' to add a new disk to this array, the new disk
      becomes sda's replacement instead of being added to the removed
      position, which is confusing for users. Meanwhile, after the new
      disk's recovery succeeds, sda is set to Faulty.

      Prioritizing the 'removed' mirror when adding a disk is a better
      choice, as sketched below. In the above scenario the behavior is the
      same as before, except that sda is not deleted: until other disks are
      added, continuing to use sda is more reliable.
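
      A sketch of the two-priority scan in raid10_add_disk(), remembering a
      WantReplacement slot only as a fallback:

        struct raid10_info *p, *repl_slot = NULL;
        int mirror;

        for (mirror = first; mirror <= last; mirror++) {
                p = &conf->mirrors[mirror];
                if (!p->rdev)
                        break;          /* a removed slot wins immediately */
                if (!repl_slot && !p->replacement &&
                    test_bit(WantReplacement, &p->rdev->flags))
                        repl_slot = p;  /* remember as a fallback only */
        }
        /* bind the new disk to p if a removed slot was found,
         * otherwise to repl_slot->replacement */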
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      2e2e7ab6
    • md: fix io loss when remove rdev fail · 894f89fa
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188628, https://gitee.com/openeuler/kernel/issues/I71EKW
      CVE: NA
      
      --------------------------------
      
      In raid10_remove_disk() we first set the rdev to WantRemove, then
      check whether any io is pending; if so, we clear the flag and return
      BUSY. io will be lost as below:
      
        raid10_remove_disk
         set WantRemove
      			write rdev
      			 if WantRemove
      			  do not submit io
         if rdev->nr_pending
          clear WantRemove
          return BUSY
      					read rdev
      					 get error data
      
      Fix it by calling md_error() on an rdev that still has io pending
      while it is being removed. When the code reaches this point the rdev
      will be removed later anyway, so setting it Faulty has little impact.
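
      A sketch of the removal path; WantRemove is the flag introduced by
      this series:

        if (test_bit(WantRemove, &rdev->flags) &&
            atomic_read(&rdev->nr_pending)) {
                /* io raced with removal: this rdev is going away soon,
                 * so fail it rather than let the racing io be lost */
                md_error(mddev, rdev);
                clear_bit(WantRemove, &rdev->flags);
                err = -EBUSY;
        }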
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      894f89fa
    • md/raid10: fix a race between removing rdev and access conf->mirrors[i].rdev · 4461a62e
      Li Nan committed
      hulk inclusion
      category: bugfix
      bugzilla: 188533, https://gitee.com/openeuler/kernel/issues/I6O7YB
      CVE: NA
      
      --------------------------------
      
      Commit ceff49d9 ("md/raid1: fix a race between removing rdev and
      access conf->mirrors[i].rdev") fixes a null-ptr-deref in raid1. The
      same bug exists in raid10; fix it in the same way.

      No sync_thread is running while the rdev is being removed, so there
      is no need to check the flag in raid10_sync_request().
      Signed-off-by: Li Nan <linan122@huawei.com>
      Reviewed-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      4461a62e