1. 10 Feb 2021, 5 commits
    • bcache: Move journal work to new flush wq · afe78ab4
      Committed by Kai Krakow
      This is potentially long-running and not latency-sensitive, so let's get
      it out of the way of other latency-sensitive events.
      
      As observed in the previous commit, the `system_wq` easily becomes
      congested by bcache, and this fixes a few more stalls I was observing
      every once in a while.
      
      Let's not make this `WQ_MEM_RECLAIM`, as it was shown to reduce the
      performance of boot and file-system operations in my tests. Also, without
      `WQ_MEM_RECLAIM`, I no longer see desktop stalls. This matches the
      previous behavior, as `system_wq` also does no memory reclaim:
      
      > // workqueue.c:
      > system_wq = alloc_workqueue("events", 0, 0);
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: Give btree_io_wq correct semantics again · d797bd98
      Committed by Kai Krakow
      Before being killed, `btree_io_wq` was allocated using
      `create_singlethread_workqueue()`, which implies `WQ_MEM_RECLAIM`. After
      it was killed, the work moved to `system_wq`, which no longer has this
      property but is also not single-threaded.
      
      Let's combine both worlds and make it multi-threaded but able to
      reclaim memory.
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • Revert "bcache: Kill btree_io_wq" · 9f233ffe
      Committed by Kai Krakow
      This reverts commit 56b30770.
      
      With the btree using the `system_wq`, I seem to see a lot more desktop
      latency than I should.
      
      After some more investigation, it looks like the original assumption
      of 56b30770 is no longer true, and bcache has a very high potential for
      congesting the `system_wq`. In turn, this introduces laggy desktop
      performance, IO stalls (at least with btrfs), and delayed input events.
      
      So let's revert this. It's important to note that the previous semantics
      of using `system_wq` mean that `btree_io_wq` should be created before
      and destroyed after the other bcache wqs to keep the same assumptions.
      
      Cc: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.4+
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: Fix register_device_aync typo · d7fae7b4
      Committed by Kai Krakow
      Should be `register_device_async`.
      
      Cc: Coly Li <colyli@suse.de>
      Signed-off-by: Kai Krakow <kai@kaishome.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: consider the fragmentation when update the writeback rate · 71dda2a5
      Committed by dongdong tao
      The current way of calculating the writeback rate only considers dirty
      sectors. This usually works fine when fragmentation is not high, but it
      gives an unreasonably small rate when very few dirty sectors occupy a
      lot of dirty buckets. In some cases the dirty buckets can reach
      CUTOFF_WRITEBACK_SYNC while the dirty data (sectors) has not even
      reached writeback_percent; the writeback rate then stays at the minimum
      value (4k), causing all writes to get stuck in a non-writeback mode
      because of the slow writeback.
      
      We accelerate the rate in 3 stages with different aggressiveness:
      the first stage starts when the dirty-bucket percentage goes above
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_LOW (50), the second at
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_MID (57), and the third at
      BCH_WRITEBACK_FRAGMENT_THRESHOLD_HIGH (64). By default
      the first stage tries to write back the amount of dirty data in one
      bucket (on average) in (1 / (dirty_buckets_percent - 50)) seconds,
      the second stage tries to write back the amount of dirty data in one
      bucket in (1 / (dirty_buckets_percent - 57)) * 100 milliseconds, and the
      third stage tries to write back the amount of dirty data in one bucket
      in (1 / (dirty_buckets_percent - 64)) milliseconds.
      
      The initial rate at each stage can be controlled by 3 configurable
      parameters, writeback_rate_fp_term_{low|mid|high}; they default to
      1, 10 and 1000 respectively. The IO-throughput hint these values try to
      achieve is described in the paragraphs above. The reason I chose those
      values as defaults is based on testing and production data; below are
      some details:
      
      A. When it comes to the low stage, we are still a fair way from the 70
         threshold, so we only want to give it a little push by setting the
         term to 1. That means the initial rate will be 170 if the fragmentation
         is 6 (calculated as bucket_size / fragmentation). This rate is very
         small, but still much more reasonable than the minimum of 8.
         For a production bcache with a light workload, if the cache device
         is bigger than 1 TB, it may take hours to consume 1% of the buckets,
         so it is very possible to reclaim enough dirty buckets in this stage
         and thus avoid entering the next stage.
      
      B. If the dirty-bucket ratio didn't turn around during the first stage,
         it comes to the mid stage. The mid stage needs to be more aggressive
         than the low stage, so I chose an initial rate 10 times higher, which
         means 1700 as the initial rate if the fragmentation is 6. This is a
         fairly normal rate we usually see for a normal workload when writeback
         happens because of writeback_percent.
      
      C. If the dirty-bucket ratio didn't turn around during the low and mid
         stages, it comes to the third stage, and this is the last chance to
         turn around and avoid the horrible cutoff-writeback-sync issue. Here
         we choose to be 100 times more aggressive than the mid stage, which
         means 170000 as the initial rate if the fragmentation is 6. This is
         also inferred from a production bcache: I've got one week's writeback
         rate data from a production bcache with quite heavy workloads (again,
         with writeback triggered by writeback_percent), and the highest rate
         area is around 100000 to 240000, so I believe this kind of
         aggressiveness is reasonable for production at this stage. It should
         also be mostly enough, because the hint tries to reclaim 1000 buckets
         per second, while that heavy production environment consumed 50
         buckets per second on average over one week's data.
      
      The option writeback_consider_fragment controls whether this feature is
      on or off; it is on by default.
      
      Lastly, below is the performance data for all of the test results,
      including data from the production environment:
      https://docs.google.com/document/d/1AmbIEa_2MhB9bqhC3rfga9tp7n9YX9PLn0jSUxscVW0/edit?usp=sharing
      Signed-off-by: dongdong tao <dongdong.tao@canonical.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 04 Feb 2021, 1 commit
  3. 26 Jan 2021, 1 commit
  4. 25 Jan 2021, 4 commits
  5. 22 Jan 2021, 4 commits
  6. 21 Jan 2021, 1 commit
    • md: Set prev_flush_start and flush_bio in an atomic way · dc5d17a3
      Committed by Xiao Ni
      A customer reported a crash caused by a flush request. A warning is
      triggered before the crash:
      
              /* new request after previous flush is completed */
              if (ktime_after(req_start, mddev->prev_flush_start)) {
                      WARN_ON(mddev->flush_bio);
                      mddev->flush_bio = bio;
                      bio = NULL;
              }
      
      The WARN_ON is triggered. A spin lock protects prev_flush_start and
      flush_bio in md_flush_request, but there is no lock protection in
      md_submit_flush_data, which can set flush_bio to NULL first because the
      compiler may reorder the write instructions.
      
      For example, flush bio1 sets flush_bio to NULL first in
      md_submit_flush_data. An interrupt, or VMware causing an extended stall,
      happens between updating flush_bio and prev_flush_start. Because
      flush_bio is NULL, flush bio2 can take the lock and be submitted to the
      underlying disks. Then flush bio1 updates prev_flush_start after the
      interrupt or extended stall.
      
      Then flush bio3 enters md_flush_request. Its start time req_start is
      behind prev_flush_start, and flush_bio is not NULL (flush bio2 hasn't
      finished), so the WARN_ON triggers. INIT_WORK is then called again.
      INIT_WORK() re-initializes the list pointers in the work_struct, which
      can result in a corrupted work list and the work_struct being queued a
      second time. With the work list corrupted, invalid work items may be
      used, causing a crash in process_one_work.
      
      We need to make sure only one flush bio can be handled at a time. So add
      a spin lock in md_submit_flush_data to protect prev_flush_start and
      flush_bio so they are updated atomically.
      Reviewed-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Xiao Ni <xni@redhat.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
  7. 14 Jan 2021, 1 commit
    • dm crypt: defer decryption to a tasklet if interrupts disabled · c87a95dc
      Committed by Ignat Korchagin
      On some specific hardware, during early boot, we occasionally get:
      
      [ 1193.920255][    T0] BUG: sleeping function called from invalid context at mm/mempool.c:381
      [ 1193.936616][    T0] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 0, name: swapper/69
      [ 1193.953233][    T0] no locks held by swapper/69/0.
      [ 1193.965871][    T0] irq event stamp: 575062
      [ 1193.977724][    T0] hardirqs last  enabled at (575061): [<ffffffffab73f662>] tick_nohz_idle_exit+0xe2/0x3e0
      [ 1194.002762][    T0] hardirqs last disabled at (575062): [<ffffffffab74e8af>] flush_smp_call_function_from_idle+0x4f/0x80
      [ 1194.029035][    T0] softirqs last  enabled at (575050): [<ffffffffad600fd2>] asm_call_irq_on_stack+0x12/0x20
      [ 1194.054227][    T0] softirqs last disabled at (575043): [<ffffffffad600fd2>] asm_call_irq_on_stack+0x12/0x20
      [ 1194.079389][    T0] CPU: 69 PID: 0 Comm: swapper/69 Not tainted 5.10.6-cloudflare-kasan-2021.1.4-dev #1
      [ 1194.104103][    T0] Hardware name: NULL R162-Z12-CD/MZ12-HD4-CD, BIOS R10 06/04/2020
      [ 1194.119591][    T0] Call Trace:
      [ 1194.130233][    T0]  dump_stack+0x9a/0xcc
      [ 1194.141617][    T0]  ___might_sleep.cold+0x180/0x1b0
      [ 1194.153825][    T0]  mempool_alloc+0x16b/0x300
      [ 1194.165313][    T0]  ? remove_element+0x160/0x160
      [ 1194.176961][    T0]  ? blk_mq_end_request+0x4b/0x490
      [ 1194.188778][    T0]  crypt_convert+0x27f6/0x45f0 [dm_crypt]
      [ 1194.201024][    T0]  ? rcu_read_lock_sched_held+0x3f/0x70
      [ 1194.212906][    T0]  ? module_assert_mutex_or_preempt+0x3e/0x70
      [ 1194.225318][    T0]  ? __module_address.part.0+0x1b/0x3a0
      [ 1194.237212][    T0]  ? is_kernel_percpu_address+0x5b/0x190
      [ 1194.249238][    T0]  ? crypt_iv_tcw_ctr+0x4a0/0x4a0 [dm_crypt]
      [ 1194.261593][    T0]  ? is_module_address+0x25/0x40
      [ 1194.272905][    T0]  ? static_obj+0x8a/0xc0
      [ 1194.283582][    T0]  ? lockdep_init_map_waits+0x26a/0x700
      [ 1194.295570][    T0]  ? __raw_spin_lock_init+0x39/0x110
      [ 1194.307330][    T0]  kcryptd_crypt_read_convert+0x31c/0x560 [dm_crypt]
      [ 1194.320496][    T0]  ? kcryptd_queue_crypt+0x1be/0x380 [dm_crypt]
      [ 1194.333203][    T0]  blk_update_request+0x6d7/0x1500
      [ 1194.344841][    T0]  ? blk_mq_trigger_softirq+0x190/0x190
      [ 1194.356831][    T0]  blk_mq_end_request+0x4b/0x490
      [ 1194.367994][    T0]  ? blk_mq_trigger_softirq+0x190/0x190
      [ 1194.379693][    T0]  flush_smp_call_function_queue+0x24b/0x560
      [ 1194.391847][    T0]  flush_smp_call_function_from_idle+0x59/0x80
      [ 1194.403969][    T0]  do_idle+0x287/0x450
      [ 1194.413891][    T0]  ? arch_cpu_idle_exit+0x40/0x40
      [ 1194.424716][    T0]  ? lockdep_hardirqs_on_prepare+0x286/0x3f0
      [ 1194.436399][    T0]  ? _raw_spin_unlock_irqrestore+0x39/0x40
      [ 1194.447759][    T0]  cpu_startup_entry+0x19/0x20
      [ 1194.458038][    T0]  secondary_startup_64_no_verify+0xb0/0xbb
      
      IO completion can be queued to a different CPU by the block subsystem as
      a "call single function/data" request. The CPU may run these routines
      from the idle task, but it does so with interrupts disabled.
      
      It is not a good idea to do decryption with IRQs disabled, even in an
      idle-task context, so just defer it to a tasklet (as is done with
      requests from hard IRQs).
      
      Fixes: 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd workqueues")
      Cc: stable@vger.kernel.org # v5.9+
      Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  8. 13 Jan 2021, 2 commits
    • dm integrity: fix the maximum number of arguments · 17ffc193
      Committed by Mikulas Patocka
      Advance the maximum number of arguments from 9 to 15 to account for
      all potential feature flags that may be supplied.
      
      Linux 4.19 added "meta_device"
      (356d9d52) and "recalculate"
      (a3fcf725) flags.
      
      Commit 468dfca3 added
      "sectors_per_bit" and "bitmap_flush_interval".
      
      Commit 84597a44 added
      "allow_discards".
      
      And commit d537858a added
      "fix_padding".
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org # v4.19+
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm crypt: do not call bio_endio() from the dm-crypt tasklet · 8e14f610
      Committed by Ignat Korchagin
      Sometimes, when dm-crypt executes decryption in a tasklet, we may get
      "BUG: KASAN: use-after-free in tasklet_action_common.constprop..."
      with a kasan-enabled kernel.
      
      When the decryption fully completes in the tasklet, dm-crypt will call
      bio_endio(), which in turn calls clone_endio() from the dm.c core code.
      That function frees the resources associated with the bio, including the
      per-bio private structures. For dm-crypt it frees the current struct
      dm_crypt_io, which contains our tasklet object, causing a use-after-free
      when the tasklet is dequeued by the kernel.
      
      To avoid this, do not call bio_endio() from the current tasklet context, but
      delay its execution to the dm-crypt IO workqueue.
      
      Fixes: 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd workqueues")
      Cc: <stable@vger.kernel.org> # v5.9+
      Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  9. 10 Jan 2021, 5 commits
    • bcache: set bcache device into read-only mode for BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET · 5342fd42
      Committed by Coly Li
      If BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET is set in the incompat feature
      set, it means the cache device was created with the obsoleted layout
      using obso_bucket_size_hi. bcache no longer supports this feature bit; a
      new BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE incompat feature bit has
      been added for a better layout that supports large bucket sizes.
      
      For legacy compatibility, if a cache device was created with the
      obsoleted BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature bit, all bcache
      devices attached to this cache set should be set to read-only. The dirty
      data can then be written back to the backing device before re-creating
      the cache device with the BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE
      feature bit using the latest bcache-tools.
      
      This patch checks the BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature bit
      when running a cache set and when attaching a bcache device to the cache
      set. If this bit is set,
      - When running a cache set, print an error kernel message indicating
        that all subsequently attached bcache devices will be read-only.
      - When attaching a bcache device, print an error kernel message
        indicating that the attached bcache device will be read-only, and ask
        users to update to the latest bcache-tools.
      
      This change only affects cache devices whose bucket size is >= 32MB;
      this is for zoned SSDs, and almost nobody uses such a large bucket size
      at the moment. If you don't explicitly set a large bucket size for a
      zoned SSD, this change is completely transparent to your bcache device.
      
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: introduce BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE for large bucket · b16671e8
      Committed by Coly Li
      When the large-bucket feature was added, BCH_FEATURE_INCOMPAT_LARGE_BUCKET
      was introduced into the incompat feature set. It used bucket_size_hi
      (which was added at the tail of struct cache_sb_disk) to extend the
      existing 16-bit bucket size to 32 bits, together with the existing
      bucket_size in struct cache_sb_disk.
      
      This was not a good idea; there are two obvious problems:
      - The bucket size is always a power of 2, so if log2(bucket size) is
        stored in the existing bucket_size of struct cache_sb_disk, adding
        bucket_size_hi is unnecessary.
      - The macro csum_set() assumes d[SB_JOURNAL_BUCKETS] is the last member
        in struct cache_sb_disk; since bucket_size_hi was added after d[],
        csum_set() calculates an unexpected super-block checksum.
      
      To fix the above problems, this patch introduces a new incompat feature
      bit, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE. When this bit is set,
      bucket_size in struct cache_sb_disk stores the order of the power-of-2
      bucket size. When the user specifies a bucket size larger than 32768
      sectors, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE is added to the
      incompat feature set, and bucket_size stores log2(bucket size) rather
      than the real bucket size value.
      
      The obsoleted BCH_FEATURE_INCOMPAT_LARGE_BUCKET won't be used anymore;
      it is renamed to BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET and is still
      recognized only by the kernel driver, for legacy-compatibility purposes.
      The previous bucket_size_hi is renamed to obso_bucket_size_hi in struct
      cache_sb_disk and is no longer used in bcache-tools.
      
      For cache device created with BCH_FEATURE_INCOMPAT_LARGE_BUCKET feature,
      bcache-tools and kernel driver still recognize the feature string and
      display it as "obso_large_bucket".
      
      With this change, the unnecessary extra space extension of the bcache
      on-disk super block can be avoided, and csum_set() generates the
      expected checksum as well.
      
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: check unsupported feature sets for bcache register · 1dfc0686
      Committed by Coly Li
      This patch adds a check for features that are incompatible with the
      currently supported feature sets.
      
      Now, if a bcache device created by bcache-tools has features that the
      current kernel doesn't support, read_super() will fail with an error
      message. E.g. if an unsupported incompatible feature is detected, bcache
      registration will fail with the dmesg "bcache: register_bcache() error:
      Unsupported incompatible feature found".
      
      Fixes: d721a43f ("bcache: increase super block version for cache device and backing device")
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: fix typo from SUUP to SUPP in features.h · f7b4943d
      Committed by Coly Li
      This patch fixes the following typos:
      from BCH_FEATURE_COMPAT_SUUP to BCH_FEATURE_COMPAT_SUPP
      from BCH_FEATURE_INCOMPAT_SUUP to BCH_FEATURE_INCOMPAT_SUPP
      from BCH_FEATURE_RO_COMPAT_SUUP to BCH_FEATURE_RO_COMPAT_SUPP
      
      Fixes: d721a43f ("bcache: increase super block version for cache device and backing device")
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: set pdev_set_uuid before scond loop iteration · e8092707
      Committed by Yi Li
      There is no need to reassign pdev_set_uuid in the second loop iteration,
      so move the assignment to just before the second loop.
      Signed-off-by: Yi Li <yili@winhong.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  10. 09 Jan 2021, 2 commits
  11. 07 Jan 2021, 1 commit
    • dm snapshot: flush merged data before committing metadata · fcc42338
      Committed by Akilesh Kailash
      If the origin device has a volatile write-back cache and the following
      events occur:
      
      1: After finishing merge operation of one set of exceptions,
         merge_callback() is invoked.
      2: Update the metadata in COW device tracking the merge completion.
         This update to COW device is flushed cleanly.
      3: System crashes and the origin device's cache where the recent
         merge was completed has not been flushed.
      
      During the next cycle, when we read the metadata from the COW device,
      we will skip reading the metadata whose merge was completed in
      step (1). This will lead to data loss/corruption.
      
      To address this, flush the origin device after the merge IO and before
      updating the metadata.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Akilesh Kailash <akailash@google.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  12. 05 Jan 2021, 5 commits
    • dm crypt: use GFP_ATOMIC when allocating crypto requests from softirq · d68b2958
      Committed by Ignat Korchagin
      Commit 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd
      workqueues") made it possible for some code paths in dm-crypt to be
      executed in softirq context, when the underlying driver processes IO
      requests in interrupt/softirq context.
      
      In this case, when allocating a new crypto request, we may sometimes get
      a stacktrace like the one below:
      
      [  210.103008][    C0] BUG: sleeping function called from invalid context at mm/mempool.c:381
      [  210.104746][    C0] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 2602, name: fio
      [  210.106599][    C0] CPU: 0 PID: 2602 Comm: fio Tainted: G        W         5.10.0+ #50
      [  210.108331][    C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
      [  210.110212][    C0] Call Trace:
      [  210.110921][    C0]  <IRQ>
      [  210.111527][    C0]  dump_stack+0x7d/0xa3
      [  210.112411][    C0]  ___might_sleep.cold+0x122/0x151
      [  210.113527][    C0]  mempool_alloc+0x16b/0x2f0
      [  210.114524][    C0]  ? __queue_work+0x515/0xde0
      [  210.115553][    C0]  ? mempool_resize+0x700/0x700
      [  210.116586][    C0]  ? crypt_endio+0x91/0x180
      [  210.117479][    C0]  ? blk_update_request+0x757/0x1150
      [  210.118513][    C0]  ? blk_mq_end_request+0x4b/0x480
      [  210.119572][    C0]  ? blk_done_softirq+0x21d/0x340
      [  210.120628][    C0]  ? __do_softirq+0x190/0x611
      [  210.121626][    C0]  crypt_convert+0x29f9/0x4c00
      [  210.122668][    C0]  ? _raw_spin_lock_irqsave+0x87/0xe0
      [  210.123824][    C0]  ? kasan_set_track+0x1c/0x30
      [  210.124858][    C0]  ? crypt_iv_tcw_ctr+0x4a0/0x4a0
      [  210.125930][    C0]  ? kmem_cache_free+0x104/0x470
      [  210.126973][    C0]  ? crypt_endio+0x91/0x180
      [  210.127947][    C0]  kcryptd_crypt_read_convert+0x30e/0x420
      [  210.129165][    C0]  blk_update_request+0x757/0x1150
      [  210.130231][    C0]  blk_mq_end_request+0x4b/0x480
      [  210.131294][    C0]  blk_done_softirq+0x21d/0x340
      [  210.132332][    C0]  ? _raw_spin_lock+0x81/0xd0
      [  210.133289][    C0]  ? blk_mq_stop_hw_queue+0x30/0x30
      [  210.134399][    C0]  ? _raw_read_lock_irq+0x40/0x40
      [  210.135458][    C0]  __do_softirq+0x190/0x611
      [  210.136409][    C0]  ? handle_edge_irq+0x221/0xb60
      [  210.137447][    C0]  asm_call_irq_on_stack+0x12/0x20
      [  210.138507][    C0]  </IRQ>
      [  210.139118][    C0]  do_softirq_own_stack+0x37/0x40
      [  210.140191][    C0]  irq_exit_rcu+0x110/0x1b0
      [  210.141151][    C0]  common_interrupt+0x74/0x120
      [  210.142171][    C0]  asm_common_interrupt+0x1e/0x40
      
      Fix this by allocating crypto requests with the GFP_ATOMIC mask in
      interrupt context.
      
      Fixes: 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd workqueues")
      Cc: stable@vger.kernel.org # v5.9+
      Reported-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
      Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm crypt: do not wait for backlogged crypto request completion in softirq · 8abec36d
      Committed by Ignat Korchagin
      Commit 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd
      workqueues") made it possible for some code paths in dm-crypt to be
      executed in softirq context, when the underlying driver processes IO
      requests in interrupt/softirq context.
      
      When the Crypto API backlogs a crypto request, dm-crypt uses
      wait_for_completion to avoid sending further requests to an already
      overloaded crypto driver. However, if the code is executing in softirq
      context, we might get the following stacktrace:
      
      [  210.235213][    C0] BUG: scheduling while atomic: fio/2602/0x00000102
      [  210.236701][    C0] Modules linked in:
      [  210.237566][    C0] CPU: 0 PID: 2602 Comm: fio Tainted: G        W         5.10.0+ #50
      [  210.239292][    C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
      [  210.241233][    C0] Call Trace:
      [  210.241946][    C0]  <IRQ>
      [  210.242561][    C0]  dump_stack+0x7d/0xa3
      [  210.243466][    C0]  __schedule_bug.cold+0xb3/0xc2
      [  210.244539][    C0]  __schedule+0x156f/0x20d0
      [  210.245518][    C0]  ? io_schedule_timeout+0x140/0x140
      [  210.246660][    C0]  schedule+0xd0/0x270
      [  210.247541][    C0]  schedule_timeout+0x1fb/0x280
      [  210.248586][    C0]  ? usleep_range+0x150/0x150
      [  210.249624][    C0]  ? unpoison_range+0x3a/0x60
      [  210.250632][    C0]  ? ____kasan_kmalloc.constprop.0+0x82/0xa0
      [  210.251949][    C0]  ? unpoison_range+0x3a/0x60
      [  210.252958][    C0]  ? __prepare_to_swait+0xa7/0x190
      [  210.254067][    C0]  do_wait_for_common+0x2ab/0x370
      [  210.255158][    C0]  ? usleep_range+0x150/0x150
      [  210.256192][    C0]  ? bit_wait_io_timeout+0x160/0x160
      [  210.257358][    C0]  ? blk_update_request+0x757/0x1150
      [  210.258582][    C0]  ? _raw_spin_lock_irq+0x82/0xd0
      [  210.259674][    C0]  ? _raw_read_unlock_irqrestore+0x30/0x30
      [  210.260917][    C0]  wait_for_completion+0x4c/0x90
      [  210.261971][    C0]  crypt_convert+0x19a6/0x4c00
      [  210.263033][    C0]  ? _raw_spin_lock_irqsave+0x87/0xe0
      [  210.264193][    C0]  ? kasan_set_track+0x1c/0x30
      [  210.265191][    C0]  ? crypt_iv_tcw_ctr+0x4a0/0x4a0
      [  210.266283][    C0]  ? kmem_cache_free+0x104/0x470
      [  210.267363][    C0]  ? crypt_endio+0x91/0x180
      [  210.268327][    C0]  kcryptd_crypt_read_convert+0x30e/0x420
      [  210.269565][    C0]  blk_update_request+0x757/0x1150
      [  210.270563][    C0]  blk_mq_end_request+0x4b/0x480
      [  210.271680][    C0]  blk_done_softirq+0x21d/0x340
      [  210.272775][    C0]  ? _raw_spin_lock+0x81/0xd0
      [  210.273847][    C0]  ? blk_mq_stop_hw_queue+0x30/0x30
      [  210.275031][    C0]  ? _raw_read_lock_irq+0x40/0x40
      [  210.276182][    C0]  __do_softirq+0x190/0x611
      [  210.277203][    C0]  ? handle_edge_irq+0x221/0xb60
      [  210.278340][    C0]  asm_call_irq_on_stack+0x12/0x20
      [  210.279514][    C0]  </IRQ>
      [  210.280164][    C0]  do_softirq_own_stack+0x37/0x40
      [  210.281281][    C0]  irq_exit_rcu+0x110/0x1b0
      [  210.282286][    C0]  common_interrupt+0x74/0x120
      [  210.283376][    C0]  asm_common_interrupt+0x1e/0x40
      [  210.284496][    C0] RIP: 0010:_aesni_enc1+0x65/0xb0
      
      Fix this by making the crypt_convert function reentrant with respect to
      a single bio, and make dm-crypt defer further bio processing to a
      workqueue if the Crypto API backlogs a request in interrupt context.
      
      Fixes: 39d42fa9 ("dm crypt: add flags to optionally bypass kcryptd workqueues")
      Cc: stable@vger.kernel.org # v5.9+
      Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm zoned: select CONFIG_CRC32 · b690bd54
      Committed by Arnd Bergmann
      Without crc32 support, this driver fails to link:
      
      arm-linux-gnueabi-ld: drivers/md/dm-zoned-metadata.o: in function `dmz_write_sb':
      dm-zoned-metadata.c:(.text+0xe98): undefined reference to `crc32_le'
      arm-linux-gnueabi-ld: drivers/md/dm-zoned-metadata.o: in function `dmz_check_sb':
      dm-zoned-metadata.c:(.text+0x7978): undefined reference to `crc32_le'
      
      Fixes: 3b1a94c8 ("dm zoned: drive-managed zoned block device target")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm integrity: select CRYPTO_SKCIPHER · f7b347ac
      Committed by Anthony Iliopoulos
      The integrity target relies on skcipher for encryption/decryption, but
      certain kernel configurations may not enable CRYPTO_SKCIPHER, leading to
      compilation errors due to unresolved symbols. Explicitly select
      CRYPTO_SKCIPHER for DM_INTEGRITY, since it unconditionally depends
      on it.
      Signed-off-by: Anthony Iliopoulos <ailiop@suse.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm raid: fix discard limits for raid1 · cc07d72b
      Committed by Mike Snitzer
      The block core warned that discard_granularity was 0 for dm-raid with a
      personality of raid1. The reason is that raid_io_hints() was incorrectly
      special-casing raid1 rather than raid0.
      
      Fix raid_io_hints() by removing the discard limits settings for
      raid1 and checking for raid0 instead.
      
      Fixes: 61697a6a ("dm: eliminate 'split_discard_bios' flag from DM target interface")
      Cc: stable@vger.kernel.org
      Reported-by: Zdenek Kabelac <zkabelac@redhat.com>
      Reported-by: Mikulas Patocka <mpatocka@redhat.com>
      Reported-by: Stephan Bärwolf <stephan@matrixstorm.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  13. 29 Dec 2020, 1 commit
  14. 24 Dec 2020, 2 commits
  15. 22 Dec 2020, 3 commits
  16. 15 Dec 2020, 2 commits