1. 04 Mar, 2016 · 8 commits
  2. 09 Jan, 2016 · 1 commit
  3. 06 Jan, 2016 · 1 commit
  4. 02 Dec, 2015 · 1 commit
  5. 20 Nov, 2015 · 1 commit
  6. 07 Nov, 2015 · 1 commit
  7. 26 Aug, 2015 · 1 commit
    •
      mtip32xx: fix regression introduced by blk-mq per-hctx flush · 74c9c913
      Authored by Jeff Moyer
      Hi,
      
      After commit f70ced09 (blk-mq: support per-distpatch_queue flush
      machinery), the mtip32xx driver may oops upon module load due to walking
      off the end of an array in mtip_init_cmd.  On initialization of the
      flush_rq, init_request is called with request_index >= the maximum queue
      depth the driver supports.  For mtip32xx, this value is used to index
      into an array.  What this means is that the driver will walk off the end
      of the array, and either oops or cause random memory corruption.
      
      The problem is easily reproduced by doing modprobe/rmmod of the mtip32xx
      driver in a loop.  I can typically reproduce the problem in about 30
      seconds.
      
      Now, in the case of mtip32xx, it actually doesn't support flush/fua, so
      I think we can simply return without doing anything.  In addition, no
      other mq-enabled driver does anything with the request_index passed into
      init_request(), so no other driver is affected.  However, I'm not really
      sure what is expected of drivers.  Ming, what did you envision drivers
      would do when initializing the flush requests?
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
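The overrun described above can be avoided with a bounds check in the init_request callback: when the flush request arrives with an index at or beyond the advertised queue depth, return early instead of indexing the per-slot array. A minimal standalone sketch of that guard; the names `init_cmd`, `MAX_SLOTS`, and `slot_initialized` are illustrative stand-ins, not the driver's actual symbols:

```c
#include <assert.h>

#define MAX_SLOTS 256                   /* illustrative command-slot count */

static int slot_initialized[MAX_SLOTS];

/*
 * Mimics an mq init_request callback.  The flush request is handed a
 * request index >= the advertised queue depth, so using it to index the
 * per-slot array would run off the end.  Returning early for
 * out-of-range indices avoids the overrun, which is safe here because
 * the device does not support flush/fua anyway.
 */
static int init_cmd(unsigned int request_idx)
{
	if (request_idx >= MAX_SLOTS)
		return 0;               /* flush request: nothing to set up */

	slot_initialized[request_idx] = 1;
	return 0;
}
```

With the guard in place, the flush request's oversized index becomes a harmless no-op rather than an out-of-bounds write.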
  8. 24 Jun, 2015 · 1 commit
  9. 16 Jun, 2015 · 8 commits
  10. 30 Oct, 2014 · 1 commit
    •
      blk-mq: add a 'list' parameter to ->queue_rq() · 74c45052
      Authored by Jens Axboe
      Since we have the notion of a 'last' request in a chain, we can use
      this to have the hardware optimize the issuing of requests. Add
      a list_head parameter to queue_rq that the driver can use to
      temporarily store hw commands for issue when 'last' is true. If we
      are doing a chain of requests, pass in a NULL list for the first
      request to force issue of that immediately, then batch the remainder
      for deferred issue until the last request has been sent.
      
      Instead of adding yet another argument to the hot ->queue_rq path,
      encapsulate the passed arguments in a blk_mq_queue_data structure.
      This is passed as a constant, and has been tested as faster than
      passing 4 (or even 3) args through ->queue_rq. Update drivers for
      the new ->queue_rq() prototype. There are no functional changes
      in this patch for drivers - if they don't use the passed in list,
      then they will just queue requests individually like before.
      Signed-off-by: Jens Axboe <axboe@fb.com>
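The batching idea behind the 'last' flag can be sketched in isolation: a driver stashes commands as they arrive and only touches the hardware once the final request of a chain is seen. A minimal standalone model, assuming simplified stand-ins for `struct request` and `blk_mq_queue_data` (the real kernel structures carry more state):

```c
#include <stddef.h>

/* Illustrative stand-ins for struct request and blk_mq_queue_data. */
struct request { int tag; };

struct queue_data {
	struct request *rq;
	int last;               /* true for the final request of a chain */
};

static int pending[32];         /* commands staged for deferred issue */
static int npending;
static int nissued;             /* commands actually sent to "hardware" */

static void hw_issue_all(void)
{
	nissued += npending;    /* one doorbell ring for the whole batch */
	npending = 0;
}

/* Batch requests; only hit the hardware when 'last' is set. */
static void queue_rq(const struct queue_data *qd)
{
	pending[npending++] = qd->rq->tag;
	if (qd->last)
		hw_issue_all();
}

/* Drive a three-request chain; 'last' is set only on the final one. */
static int demo_chain(void)
{
	struct request r[3] = { {1}, {2}, {3} };
	int i;

	for (i = 0; i < 3; i++) {
		struct queue_data qd = { &r[i], i == 2 };
		queue_rq(&qd);
	}
	return nissued;
}
```

Passing one pointer to a small constant struct also mirrors the commit's point about argument marshalling: adding fields later costs nothing at the `->queue_rq()` call site.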
  11. 05 Oct, 2014 · 1 commit
    •
      block: disable entropy contributions for nonrot devices · b277da0a
      Authored by Mike Snitzer
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT.
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
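The change is a flag-mask pattern: wherever a driver marks its queue non-rotational, it now also clears the entropy-contribution bit. A minimal standalone sketch of that pattern; the `QF_*` constants and `struct queue` here are illustrative models of the kernel's queue flags, not the real definitions:

```c
/* Illustrative flag bits modeled on the block-queue flag pattern. */
#define QF_NONROT      (1u << 0)    /* non-rotational (SSD-like) device */
#define QF_ADD_RANDOM  (1u << 1)    /* completions feed the entropy pool */

struct queue { unsigned int flags; };

/* Non-rotational device: advertise NONROT and opt out of entropy. */
static void mark_nonrot(struct queue *q)
{
	q->flags |= QF_NONROT;
	q->flags &= ~QF_ADD_RANDOM;
}

/* Start from the historical default (entropy on), then mark non-rot. */
static unsigned int flags_after_nonrot(void)
{
	struct queue q = { QF_ADD_RANDOM };

	mark_nonrot(&q);
	return q.flags;
}
```

Clearing the bit at the same site that sets NONROT keeps the two decisions coupled, so no driver can advertise an SSD while still feeding its predictable completion times into the random pool.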
  12. 23 Sep, 2014 · 3 commits
  13. 03 Sep, 2014 · 1 commit
    •
      blk-mq: pass along blk_mq_alloc_tag_set return values · dc501dc0
      Authored by Robert Elliott
      Two of the blk-mq based drivers do not pass back the return value
      from blk_mq_alloc_tag_set, instead just returning -ENOMEM.
      
      blk_mq_alloc_tag_set returns -EINVAL if the number of queues or
      queue depth is bad.  -ENOMEM implies that retrying after freeing some
      memory might be more successful, but that won't ever change
      in the -EINVAL cases.
      
      Change the null_blk and mtip32xx drivers to pass along
      the return value.
      Signed-off-by: Robert Elliott <elliott@hp.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
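The before/after shapes of this error-propagation fix can be shown side by side. A minimal standalone sketch, where `alloc_tag_set`, `probe_before`, and `probe_after` are hypothetical stand-ins for the real allocation and probe paths:

```c
#include <errno.h>

/* Stand-in for blk_mq_alloc_tag_set: bad config is -EINVAL, not -ENOMEM. */
static int alloc_tag_set(int nr_queues)
{
	if (nr_queues <= 0)
		return -EINVAL;     /* retrying after freeing memory won't help */
	return 0;
}

/* Before: every failure is collapsed to -ENOMEM, hiding -EINVAL. */
static int probe_before(int nr_queues)
{
	if (alloc_tag_set(nr_queues))
		return -ENOMEM;
	return 0;
}

/* After: propagate the real error code to the caller unchanged. */
static int probe_after(int nr_queues)
{
	int ret = alloc_tag_set(nr_queues);

	if (ret)
		return ret;
	return 0;
}
```

The caller of `probe_after` can now tell a genuinely retryable allocation failure from a configuration bug that will fail identically forever.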
  14. 13 Aug, 2014 · 1 commit
  15. 07 Jun, 2014 · 1 commit
    •
      mtip32xx: minor performance enhancements · f45c40a9
      Authored by Sam Bradshaw
      This patch adds the following:
      
      1) Compiler hinting in the fast path.
      2) A prefetch of port->flags to eliminate moderate cpu stalling later
      in mtip_hw_submit_io().
      3) Eliminate a redundant rq_data_dir().
      4) Reorder members of driver_data to eliminate false cacheline sharing
      between irq_workers_active and unal_qdepth.
      
      With some workload and topology configurations, I'm seeing ~1.5%
      throughput improvement in small block random read benchmarks as well
      as improved latency std. dev.
      Signed-off-by: Sam Bradshaw <sbradshaw@micron.com>
      
      Add include of <linux/prefetch.h>
      Signed-off-by: Jens Axboe <axboe@fb.com>
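The techniques listed above (branch hinting, prefetching, and cacheline-aware struct layout) can be sketched in a standalone fragment. This is a generic illustration using the GCC/Clang builtins `__builtin_expect` and `__builtin_prefetch`; the field names echo the commit but the struct and `submit` function are hypothetical, and a 64-byte cache line is assumed:

```c
#include <stddef.h>

/* Branch hints in the fast path, as the kernel's likely()/unlikely(). */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/*
 * Keep hot, independently-updated fields on separate cache lines so
 * writes to one do not invalidate the line holding the other
 * (false sharing).  64-byte line size is assumed for illustration.
 */
struct drv_data {
	int irq_workers_active;
	char pad[64 - sizeof(int)];
	int unal_qdepth;
};

/* Prefetch the flags word early so the later test doesn't stall. */
static int submit(const unsigned long *flags, int is_write)
{
	__builtin_prefetch(flags);
	if (unlikely(*flags & 1))
		return -1;          /* slow path: device offline */
	return is_write;            /* fast path */
}

static int submit_demo(void)
{
	unsigned long flags = 0;    /* device online */

	return submit(&flags, 1);
}
```

None of these hints change behavior; they only steer code layout, cache traffic, and memory placement, which is why the observed win is a modest ~1.5% rather than a structural speedup.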
  16. 05 Jun, 2014 · 1 commit
  17. 21 May, 2014 · 1 commit
  18. 14 May, 2014 · 3 commits
  19. 23 Apr, 2014 · 3 commits
  20. 18 Apr, 2014 · 1 commit