1. 20 Apr 2017, 1 commit
    • mtip32xx: pass BLK_MQ_F_NO_SCHED · 4981d04d
      Authored by Ming Lei
      The recently introduced MQ IO scheduler breaks mtip32xx in the
      following way.
      
      mtip32xx uses the 'request_index' passed to .init_request() as the
      hardware tag index when initializing its hardware queue, so it
      effectively requires that rq->tag always equals the 'request_index'
      passed to .init_request(). The current blk-mq IO scheduler cannot
      guarantee this, so this patch passes BLK_MQ_F_NO_SCHED, which at
      least keeps mtip32xx working.
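
      The shape of the fix is small: the flag is added when the driver
      builds its blk_mq_tag_set. Below is a minimal sketch, not the exact
      mtip32xx hunk (the helper name and depth parameter are illustrative),
      of what passing BLK_MQ_F_NO_SCHED looks like:

          #include <linux/blk-mq.h>
          #include <linux/string.h>

          /*
           * Sketch only: BLK_MQ_F_NO_SCHED keeps blk-mq from attaching an
           * I/O scheduler to this queue, so rq->tag stays identical to the
           * 'request_index' later handed to .init_request().
           */
          static int example_setup_tag_set(struct blk_mq_tag_set *set,
                                           const struct blk_mq_ops *ops,
                                           unsigned int depth,
                                           unsigned int cmd_size)
          {
                  memset(set, 0, sizeof(*set));
                  set->ops = ops;
                  set->nr_hw_queues = 1;
                  set->queue_depth = depth;   /* mtip32xx uses its slot count */
                  set->cmd_size = cmd_size;
                  set->numa_node = NUMA_NO_NODE;
                  set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_NO_SCHED;

                  return blk_mq_alloc_tag_set(set);
          }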
      
      This patch fixes the following strange hardware failure. The
      issue can be triggered easily when doing I/O with mq-deadline
      enabled.
      
      [  186.972578] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 32993
      [  186.972578] {1}[Hardware Error]: event severity: fatal
      [  186.972579] {1}[Hardware Error]:  Error 0, type: fatal
      [  186.972580] {1}[Hardware Error]:   section_type: PCIe error
      [  186.972580] {1}[Hardware Error]:   port_type: 0, PCIe end point
      [  186.972581] {1}[Hardware Error]:   version: 1.0
      [  186.972581] {1}[Hardware Error]:   command: 0x0407, status: 0x0010
      [  186.972582] {1}[Hardware Error]:   device_id: 0000:07:00.0
      [  186.972582] {1}[Hardware Error]:   slot: 4
      [  186.972583] {1}[Hardware Error]:   secondary_bus: 0x00
      [  186.972583] {1}[Hardware Error]:   vendor_id: 0x1344, device_id: 0x5150
      [  186.972584] {1}[Hardware Error]:   class_code: 008001
      [  186.972585] Kernel panic - not syncing: Fatal hardware error!
      Reported-by: Jozef Mikovic <jmikovic@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  2. 01 Dec 2016, 1 commit
  3. 12 Nov 2016, 1 commit
  4. 15 Sep 2016, 1 commit
  5. 29 Aug 2016, 1 commit
  6. 28 Jun 2016, 1 commit
    • block: convert to device_add_disk() · 0d52c756
      Authored by Dan Williams
      For block drivers that specify a parent device, convert them to use
      device_add_disk().
      
      This conversion was done with the following Coccinelle semantic patch:
      
          @@
          struct gendisk *disk;
          expression E;
          @@
      
          - disk->driverfs_dev = E;
          ...
          - add_disk(disk);
          + device_add_disk(E, disk);
      
          @@
          struct gendisk *disk;
          expression E1, E2;
          @@
      
          - disk->driverfs_dev = E1;
          ...
          E2 = disk;
          ...
          - add_disk(E2);
          + device_add_disk(E1, E2);
      
      ...plus some manual fixups for a few missed conversions.
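
      As an illustration of what the semantic patch produces, here is a
      hedged before/after fragment for a hypothetical PCI-attached driver
      (not a hunk from this series; the names are made up):

          #include <linux/blkdev.h>
          #include <linux/genhd.h>
          #include <linux/pci.h>

          /* Hypothetical registration helper showing the conversion. */
          static void example_register_disk(struct pci_dev *pdev,
                                            struct gendisk *disk)
          {
                  /*
                   * Before:
                   *      disk->driverfs_dev = &pdev->dev;
                   *      add_disk(disk);
                   *
                   * After: the parent device is passed directly.
                   */
                  device_add_disk(&pdev->dev, disk);
          }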
      
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Martin K. Petersen <martin.petersen@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  7. 08 Jun 2016, 1 commit
  8. 13 Apr 2016, 2 commits
  9. 19 Mar 2016, 1 commit
  10. 04 Mar 2016, 10 commits
  11. 09 Jan 2016, 1 commit
  12. 06 Jan 2016, 1 commit
  13. 02 Dec 2015, 1 commit
  14. 20 Nov 2015, 1 commit
  15. 07 Nov 2015, 1 commit
  16. 26 Aug 2015, 1 commit
    • mtip32xx: fix regression introduced by blk-mq per-hctx flush · 74c9c913
      Authored by Jeff Moyer
      Hi,
      
      After commit f70ced09 ("blk-mq: support per-distpatch_queue flush
      machinery"), the mtip32xx driver may oops on module load due to
      walking off the end of an array in mtip_init_cmd.  When the flush_rq
      is initialized, init_request is called with a request_index >= the
      maximum queue depth the driver supports.  mtip32xx uses this value to
      index into an array, so the driver walks off the end of that array
      and either oopses or causes random memory corruption.
      
      The problem is easily reproduced by doing modprobe/rmmod of the mtip32xx
      driver in a loop.  I can typically reproduce the problem in about 30
      seconds.
      
      Now, in the case of mtip32xx, it actually doesn't support flush/fua, so
      I think we can simply return without doing anything.  In addition, no
      other mq-enabled driver does anything with the request_index passed into
      init_request(), so no other driver is affected.  However, I'm not really
      sure what is expected of drivers.  Ming, what did you envision drivers
      would do when initializing the flush requests?
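
      For reference, a hedged sketch of the kind of guard described above,
      using the .init_request() signature of that era (the MAX_CMD_SLOTS
      limit and function name are illustrative; the upstream fix may differ
      in detail):

          #include <linux/blk-mq.h>

          #define MAX_CMD_SLOTS 256       /* stands in for the driver's real limit */

          /* Skip per-slot setup for the flush request, whose request_idx
           * lies beyond the driver's command-slot array. */
          static int example_init_request(void *data, struct request *rq,
                                          unsigned int hctx_idx,
                                          unsigned int request_idx,
                                          unsigned int numa_node)
          {
                  if (request_idx >= MAX_CMD_SLOTS)
                          return 0;       /* flush request: nothing to map */

                  /* ... normal per-tag command buffer setup ... */
                  return 0;
          }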
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  17. 24 Jun 2015, 1 commit
  18. 16 Jun 2015, 8 commits
  19. 30 Oct 2014, 1 commit
    • blk-mq: add a 'list' parameter to ->queue_rq() · 74c45052
      Authored by Jens Axboe
      Since we have the notion of a 'last' request in a chain, we can use
      this to have the hardware optimize the issuing of requests. Add
      a list_head parameter to queue_rq that the driver can use to
      temporarily store hw commands for issue when 'last' is true. If we
      are doing a chain of requests, pass in a NULL list for the first
      request to force issue of that immediately, then batch the remainder
      for deferred issue until the last request has been sent.
      
      Instead of adding yet another argument to the hot ->queue_rq path,
      encapsulate the passed arguments in a blk_mq_queue_data structure.
      This is passed as a constant, and has been tested as faster than
      passing 4 (or even 3) args through ->queue_rq. Update drivers for
      the new ->queue_rq() prototype. There are no functional changes in
      this patch for drivers; if they don't use the passed-in list, they
      will simply queue requests individually as before.
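
      A hedged sketch of a driver picking up the new prototype (the
      'example_' queue type and issue helper are hypothetical):

          #include <linux/blk-mq.h>
          #include <linux/list.h>

          struct example_hw_queue {
                  struct list_head pending;       /* commands staged for the HW */
          };

          static void example_issue_pending(struct example_hw_queue *hwq)
          {
                  /* ... write the batched commands to the hardware ... */
          }

          static int example_queue_rq(struct blk_mq_hw_ctx *hctx,
                                      const struct blk_mq_queue_data *bd)
          {
                  struct example_hw_queue *hwq = hctx->driver_data;
                  struct request *rq = bd->rq;

                  blk_mq_start_request(rq);
                  list_add_tail(&rq->queuelist, &hwq->pending);

                  /* Only kick the hardware once the chain is complete. */
                  if (bd->last)
                          example_issue_pending(hwq);

                  return BLK_MQ_RQ_QUEUE_OK;
          }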
      Signed-off-by: Jens Axboe <axboe@fb.com>
  20. 05 Oct 2014, 1 commit
    • block: disable entropy contributions for nonrot devices · b277da0a
      Authored by Mike Snitzer
      Clear QUEUE_FLAG_ADD_RANDOM in all block drivers that set
      QUEUE_FLAG_NONROT.
      
      Historically, all block devices have automatically made entropy
      contributions.  But as previously stated in commit e2e1a148 ("block: add
      sysfs knob for turning off disk entropy contributions"):
          - On SSD disks, the completion times aren't as random as they
            are for rotational drives. So it's questionable whether they
            should contribute to the random pool in the first place.
          - Calling add_disk_randomness() has a lot of overhead.
      
      There are more reliable sources for randomness than non-rotational block
      devices.  From a security perspective it is better to err on the side of
      caution than to allow entropy contributions from unreliable "random"
      sources.
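
      The per-driver change is mechanical; a sketch of the pattern applied
      throughout the series (the helper name is illustrative):

          #include <linux/blkdev.h>

          /* Mark a queue non-rotational and, as this commit does across
           * drivers, also clear its entropy-contribution flag. */
          static void example_mark_queue_nonrot(struct request_queue *q)
          {
                  queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
                  queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
          }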
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  21. 23 Sep 2014, 3 commits