1. 18 Jun 2014 (3 commits)
    • blk-mq: bitmap tag: fix races in bt_get() function · 86fb5c56
      Alexander Gordeev authored
      This update fixes a few issues in the bt_get() function:
      
      - the list_empty(&wait.task_list) check is not protected;
      
      - the was_empty check is always true, so *every* thread entering
        the loop resets the bt_wait_state::wait_cnt counter, rather
        than only every bt->wake_cnt'th thread;
      
      - the 'bt_wait_state::wait_cnt' counter update is redundant,
        since it also gets reset in the bt_clear_tag() function.
      
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
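      The lost-wakeup hazard behind the first bullet generalizes beyond
      blk-mq: a waiter must register itself, under the lock that also
      serializes wakeups, before (re)checking the condition; an unlocked
      check (the analogue of the unprotected list_empty(&wait.task_list)
      test) can miss a wakeup that races with it. A minimal userspace
      pthreads model of the safe pattern; names like wait_for_tag() are
      illustrative, this is not the kernel patch itself:
      
      #include <pthread.h>
      #include <stdbool.h>
      
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
      static bool tag_available;
      
      /* Safe: the condition is re-checked under the same lock that
       * serializes release_tag(), so no wakeup can slip through
       * between the check and the sleep. */
      static void wait_for_tag(void)
      {
              pthread_mutex_lock(&lock);
              while (!tag_available)
                      pthread_cond_wait(&cond, &lock);
              tag_available = false;          /* claim the tag */
              pthread_mutex_unlock(&lock);
      }
      
      static void release_tag(void)
      {
              pthread_mutex_lock(&lock);
              tag_available = true;
              pthread_cond_signal(&cond);
              pthread_mutex_unlock(&lock);
      }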
    • blk-mq: bitmap tag: fix race on blk_mq_bitmap_tags::wake_cnt · 2971c35f
      Alexander Gordeev authored
      This piece of code in the bt_clear_tag() function is racy:
      
      	bs = bt_wake_ptr(bt);
      	if (bs && atomic_dec_and_test(&bs->wait_cnt)) {
      		atomic_set(&bs->wait_cnt, bt->wake_cnt);
      		wake_up(&bs->wait);
      	}
      
      Since nothing prevents bt_wake_ptr() from returning the very
      same 'bs' address on multiple CPUs, the following scenario is
      possible:
      
          CPU1                                CPU2
          ----                                ----
      
      0.  bs = bt_wake_ptr(bt);               bs = bt_wake_ptr(bt);
      1.  atomic_dec_and_test(&bs->wait_cnt)
      2.                                      atomic_dec_and_test(&bs->wait_cnt)
      3.  atomic_set(&bs->wait_cnt, bt->wake_cnt);
      
      If the decrement in [1] yields zero then, for some window of time,
      the decrement in [2] produces a negative/overflowed value, which
      is not expected. The follow-up assignment in [3] then overwrites
      the invalid value with the batch value (which likely keeps the
      issue from being severe), but this is still incorrect: it silently
      discards the concurrent decrement instead of accounting for it.
      
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
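      A minimal C11 model of the batching race described above, with one
      plausible repair: restoring the counter with an atomic add, rather
      than a plain store, compensates any decrement that slipped in
      while the counter was at or below zero. This is a sketch of the
      pattern, not the upstream diff; WAKE_BATCH and the function names
      are illustrative:
      
      #include <stdatomic.h>
      
      #define WAKE_BATCH 8
      
      static atomic_int wait_cnt = WAKE_BATCH;
      
      /* Racy, mirroring the quoted snippet: between the decrement
       * that hits zero and the store, another thread can drive the
       * counter negative, and the store then erases that decrement. */
      static int batch_release_racy(void)
      {
              if (atomic_fetch_sub(&wait_cnt, 1) == 1) {
                      atomic_store(&wait_cnt, WAKE_BATCH);
                      return 1;       /* caller wakes waiters */
              }
              return 0;
      }
      
      /* One repair: add the batch back in, so a concurrent decrement
       * that landed in the window is preserved instead of clobbered. */
      static int batch_release_fixed(void)
      {
              if (atomic_fetch_sub(&wait_cnt, 1) == 1) {
                      atomic_fetch_add(&wait_cnt, WAKE_BATCH);
                      return 1;
              }
              return 0;
      }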
    • blk-mq: bitmap tag: fix races on shared ::wake_index fields · 8537b120
      Alexander Gordeev authored
      Fix racy updates of shared blk_mq_bitmap_tags::wake_index
      and blk_mq_hw_ctx::wake_index fields.
      
      Cc: Ming Lei <tom.leiming@gmail.com>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
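      The usual fix for a shared round-robin cursor like wake_index is
      to advance it with a compare-and-swap, so two CPUs that read the
      same old value cannot both step it (skipping an entry, or reusing
      one). A small C11 sketch in that spirit; the constant and names
      are illustrative, not the kernel code:
      
      #include <stdatomic.h>
      
      #define BT_WAIT_QUEUES 8
      
      static atomic_int wake_index;
      
      /* Advance the shared index from the value the caller observed;
       * if another CPU advanced it first, the CAS fails and we keep
       * that CPU's value rather than stepping twice. */
      static void wake_index_inc(int observed)
      {
              int next = (observed + 1) % BT_WAIT_QUEUES;
      
              atomic_compare_exchange_strong(&wake_index, &observed, next);
      }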
2. 14 Jun 2014 (2 commits)
3. 12 Jun 2014 (2 commits)
4. 11 Jun 2014 (3 commits)
5. 10 Jun 2014 (1 commit)
6. 09 Jun 2014 (2 commits)
7. 07 Jun 2014 (3 commits)
8. 06 Jun 2014 (3 commits)
    • blk-mq: bump max tag depth to 10K tags · a4391c64
      Jens Axboe authored
      For some scsi-mq cases, the tag map can be huge. So increase the
      max number of tags we support.
      
      Additionally, don't fail with EINVAL if a user requests too many
      tags. Warn that the tag depth has been adjusted down, and store
      the new value inside the tag_set passed in.
      Signed-off-by: Jens Axboe <axboe@fb.com>
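      The warn-and-clamp behavior described above follows a common
      shape; a hedged sketch of it, where the constant and the function
      name are hypothetical rather than the exact upstream code:
      
      #include <stdio.h>
      
      #define MAX_TAG_DEPTH 10240     /* the "10K tags" ceiling */
      
      /* Instead of rejecting an oversized request with EINVAL, warn
       * and clamp; the caller stores the adjusted depth back into the
       * tag_set it passed in. */
      static unsigned int clamp_tag_depth(unsigned int requested)
      {
              if (requested > MAX_TAG_DEPTH) {
                      fprintf(stderr,
                              "tag depth %u too large, adjusted to %u\n",
                              requested, MAX_TAG_DEPTH);
                      return MAX_TAG_DEPTH;
              }
              return requested;
      }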
    • block: add blk_rq_set_block_pc() · f27b087b
      Jens Axboe authored
      With the optimizations around not clearing the full request at alloc
      time, we are leaving some of the needed init for REQ_TYPE_BLOCK_PC
      up to the user allocating the request.
      
      Add a blk_rq_set_block_pc() that sets the command type to
      REQ_TYPE_BLOCK_PC, and properly initializes the members associated
      with this type of request. Update callers to use this function instead
      of manipulating rq->cmd_type directly.
      
      Includes fixes from Christoph Hellwig <hch@lst.de> for my half-assed
      attempt.
      Signed-off-by: Jens Axboe <axboe@fb.com>
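      The helper's shape, as described: set the command type, then reset
      the state a BLOCK_PC request relies on, so callers stop poking
      rq->cmd_type by hand. A kernel-style sketch; setting cmd_type is
      what the commit confirms, while the exact member list below is
      illustrative rather than the verbatim patch:
      
      void blk_rq_set_block_pc(struct request *rq)
      {
              rq->cmd_type = REQ_TYPE_BLOCK_PC;
              /* illustrative selection of members to reinitialize */
              rq->__data_len = 0;
              rq->__sector = (sector_t) -1;
              rq->bio = rq->biotail = NULL;
              memset(rq->__cmd, 0, sizeof(rq->__cmd));
      }
      
      Per the commit, callers that previously assigned rq->cmd_type
      directly are updated to call this helper instead.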
    • block: add notion of a chunk size for request merging · 762380ad
      Jens Axboe authored
      Some drivers have different limits on what size a request should
      optimally be, depending on the offset of the request, in effect
      dividing the device into chunks. Add a setting that allows the
      driver to inform the block layer of such a chunk size. The block
      layer will then prevent merging across the chunks.
      
      This is needed to optimally support NVMe with a non-zero stripe size.
      Signed-off-by: Jens Axboe <axboe@fb.com>
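      The arithmetic behind "no merging across chunks" is simple when
      the chunk size is a power of two: the largest I/O permitted at a
      given offset is the distance to the next chunk boundary. A small,
      self-contained model of that computation, with illustrative names
      and the power-of-two assumption made explicit:
      
      #include <assert.h>
      #include <stdint.h>
      
      /* Largest request size (in sectors) allowed at `offset`, given a
       * power-of-two chunk size; a driver's other limits, such as
       * max_sectors, would still apply on top of this. */
      static uint64_t max_size_at(uint64_t offset, uint64_t chunk_sectors)
      {
              /* the mask trick below requires a power-of-two chunk */
              assert(chunk_sectors &&
                     (chunk_sectors & (chunk_sectors - 1)) == 0);
              return chunk_sectors - (offset & (chunk_sectors - 1));
      }
      
      int main(void)
      {
              /* 128-sector chunks: at offset 120 only 8 sectors remain
               * before the boundary, so a merge may not grow past it. */
              assert(max_size_at(120, 128) == 8);
              assert(max_size_at(128, 128) == 128);
              return 0;
      }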
9. 05 Jun 2014 (2 commits)
10. 04 Jun 2014 (5 commits)
11. 31 May 2014 (4 commits)
12. 30 May 2014 (4 commits)
13. 29 May 2014 (4 commits)
14. 28 May 2014 (2 commits)