1. 29 Mar, 2023 (1 commit)
  2. 15 Mar, 2023 (1 commit)
  3. 04 Jan, 2023 (1 commit)
  4. 12 Dec, 2022 (1 commit)
  5. 02 Nov, 2022 (5 commits)
  6. 13 Jul, 2022 (2 commits)
  7. 31 May, 2022 (2 commits)
  8. 21 Mar, 2022 (1 commit)
  9. 08 Mar, 2022 (2 commits)
    • blk-mq: decrease pending_queues when it expires · 326d641b
      Committed by Yu Kuai
      hulk inclusion
      category: performance
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4S8DW
      
      ---------------------------
      
      Once pending_queues is increased, it is only decreased again when
      nr_active reaches zero. This leads to under-utilization of host
      tags: while pending_queues is non-zero, the tags available to the
      queue are capped at max(host tags / active_queues, 4) instead of
      the number of tags the queue actually needs.
      
      Fix it by attaching an expiration time to each increase of
      pending_queues and decreasing the counter when it expires. If no
      further tag allocation failures occur, pending_queues drops back
      to zero and the queue can use the whole set of host tags.
      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
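The expiration mechanism described above can be sketched as a small standalone C model. This is a hedged userspace illustration, not the kernel code: the type and function names (pending_state, note_tag_failure, current_pending) and the PENDING_TIMEOUT constant are invented here for the sketch.

```c
/* Illustrative sketch: every tag-allocation failure bumps
 * pending_queues and (re)arms a deadline; when the deadline passes
 * with no further failures, the counter is decreased again so the
 * queue regains the whole host tag depth. All names are hypothetical. */

#define PENDING_TIMEOUT 30UL   /* assumed grace period, in "jiffies" */

struct pending_state {
    unsigned int pending_queues;
    unsigned long expire_at;   /* deadline armed by the last failure */
};

/* Called when a queue fails to allocate a driver tag. */
static void note_tag_failure(struct pending_state *s, unsigned long now)
{
    s->pending_queues++;
    s->expire_at = now + PENDING_TIMEOUT;
}

/* Decrease pending_queues once the grace period has expired, instead
 * of waiting for nr_active to reach zero. */
static unsigned int current_pending(struct pending_state *s, unsigned long now)
{
    if (s->pending_queues && now >= s->expire_at)
        s->pending_queues--;
    return s->pending_queues;
}
```

With one failure recorded at time 100 and a 30-tick grace period, the pending count still reads 1 at time 110 but drops to 0 once checked after time 130, restoring the full tag depth to the queue.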
    • blk-mq: allow hardware queue to get more tag while sharing a tag set · c6f9c0e2
      Committed by Yu Kuai
      hulk inclusion
      category: performance
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4S8DW
      
      ---------------------------
      
      When a tag set is shared, most disks may issue only a small
      amount of IO while a few issue a large amount. The current
      approach limits the maximum number of tags a disk can get to an
      equal share of the total tags, so the few heavily loaded disks
      cannot get enough tags even though many tags are still free in
      the tag set.
      
      Add 'pending_queues' to blk_mq_tag_set to count how many queues
      cannot get a driver tag. If this value is zero, there is no need
      to limit the maximum number of available tags.
      
      On the other hand, if a queue stops issuing IO, 'active_queues'
      is not decreased for a period of time (the request timeout), so
      many tags stay unavailable because the maximum number of
      available tags is set to max(total tags / active_queues, 4).
      Therefore, decrease it when 'nr_active' is 0.
      
      This functionality is enabled by default; to disable it, add
      "blk_mq.unfair_dtag=0" to the boot command line.
      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Hou Tao <houtao1@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
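The limiting policy this commit describes can be illustrated with a small standalone C function. This is a sketch under assumptions (the function and parameter names are invented here); the real kernel logic lives in the blk-mq tag-sharing code and also interacts with the sbitmap depth.

```c
/* Illustrative sketch of the tag limit described above: while no
 * queue is starved (pending_queues == 0), a queue may use the whole
 * shared tag set; otherwise its share is capped at
 * max(total_tags / active_queues, 4). Names are hypothetical. */
static unsigned int hctx_tag_depth(unsigned int total_tags,
                                   unsigned int active_queues,
                                   unsigned int pending_queues)
{
    unsigned int depth;

    if (!pending_queues)
        return total_tags;          /* nobody is starved: no limiting */

    depth = total_tags / active_queues;
    return depth > 4 ? depth : 4;   /* never drop below 4 tags */
}
```

With 256 shared tags and 8 active queues, a queue is capped at 32 tags only while some queue has actually failed to get a driver tag; otherwise it may use all 256.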
  10. 31 Dec, 2021 (1 commit)
  11. 10 Dec, 2021 (1 commit)
  12. 06 Dec, 2021 (2 commits)
  13. 15 Nov, 2021 (2 commits)
  14. 19 Oct, 2021 (2 commits)
  15. 12 Oct, 2021 (1 commit)
  16. 27 Jan, 2021 (2 commits)
  17. 05 Dec, 2020 (2 commits)
    • block: fix incorrect branching in blk_max_size_offset() · 65f33b35
      Committed by Mike Snitzer
      If a non-zero 'chunk_sectors' is passed to blk_max_size_offset(),
      that override is incorrectly ignored.
      
      The old blk_max_size_offset() branching, from before commit
      3ee16db3, must be used only if the passed 'chunk_sectors' override
      is zero.
      
      Fixes: 3ee16db3 ("dm: fix IO splitting")
      Cc: stable@vger.kernel.org # 5.9
      Reported-by: John Dorminy <jdorminy@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm: fix IO splitting · 3ee16db3
      Committed by Mike Snitzer
      Commit 882ec4e6 ("dm table: stack 'chunk_sectors' limit to account
      for target-specific splitting") caused a couple of regressions:
      1) Using lcm_not_zero() when stacking chunk_sectors was a bug because
         chunk_sectors must reflect the most limited of all devices in the
         IO stack.
      2) DM targets that set max_io_len but do _not_ provide an
         .iterate_devices method no longer had their IO split properly.
      
      Commit 5091cdec ("dm: change max_io_len() to use
      blk_max_size_offset()") also caused a regression: DM no longer
      supported varied (per-target) IO splitting. The implication is
      potentially severe performance loss for IO stacks that use a DM
      target like dm-cache to hide the performance limitations of a
      slower device (e.g. one that requires 4K IO splitting).
      
      Coming full circle: fix all of these issues by discontinuing the
      stacking of chunk_sectors via ti->max_io_len in
      dm_calculate_queue_limits(), adding an optional chunk_sectors
      override argument to blk_max_size_offset(), and updating DM's
      max_io_len() to pass ti->max_io_len to its blk_max_size_offset()
      call.
      
      Passing in an optional chunk_sectors override to blk_max_size_offset()
      allows for code reuse of block's centralized calculation for max IO
      size based on provided offset and split boundary.
      
      Fixes: 882ec4e6 ("dm table: stack 'chunk_sectors' limit to account for target-specific splitting")
      Fixes: 5091cdec ("dm: change max_io_len() to use blk_max_size_offset()")
      Cc: stable@vger.kernel.org
      Reported-by: John Dorminy <jdorminy@redhat.com>
      Reported-by: Bruce Johnston <bjohnsto@redhat.com>
      Reported-by: Kirill Tkhai <ktkhai@virtuozzo.com>
      Reviewed-by: John Dorminy <jdorminy@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
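The override behaviour described in these two blk_max_size_offset() commits can be sketched as a standalone C function. This is a simplified userspace model (plain parameters instead of a struct request_queue, invented function name), not the kernel implementation: a non-zero chunk_sectors argument takes precedence, and the queue's own chunk_sectors limit is consulted only when no override is passed, which is exactly the branching the fix restores.

```c
static int is_pow2(unsigned int n)
{
    return n && !(n & (n - 1));
}

/* Simplified model of blk_max_size_offset() after the branching fix:
 * 'chunk_sectors' is the optional override (0 means "none"). Only
 * when no override is passed does the queue's own limit apply. */
static unsigned int max_size_at_offset(unsigned int max_sectors,
                                       unsigned int queue_chunk_sectors,
                                       unsigned long long offset,
                                       unsigned int chunk_sectors)
{
    if (!chunk_sectors) {                 /* no override passed in */
        if (queue_chunk_sectors)
            chunk_sectors = queue_chunk_sectors;
        else
            return max_sectors;           /* no split boundary at all */
    }

    /* Sectors remaining until the next chunk boundary. */
    if (is_pow2(chunk_sectors))
        chunk_sectors -= offset & (chunk_sectors - 1);
    else
        chunk_sectors -= offset % chunk_sectors;

    return max_sectors < chunk_sectors ? max_sectors : chunk_sectors;
}
```

For example, at offset 100 with a queue chunk limit of 128 and no override, the next boundary allows 28 sectors; passing a DM-style override of 8 (as max_io_len() would via ti->max_io_len) shrinks that to 4.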
  18. 17 Oct, 2020 (1 commit)
  19. 07 Oct, 2020 (2 commits)
  20. 06 Oct, 2020 (4 commits)
  21. 25 Sep, 2020 (4 commits)