1. 10 Jul, 2017: 5 commits
  2. 06 Jul, 2017: 11 commits
  3. 04 Jul, 2017: 5 commits
  4. 02 Jul, 2017: 4 commits
  5. 01 Jul, 2017: 10 commits
  6. 29 Jun, 2017: 2 commits
    • nvme: Makefile: remove dead build rule · a2b93775
      Authored by Valentin Rothberg
      Remove dead build rule for drivers/nvme/host/scsi.c which has been
      removed by commit ("nvme: Remove SCSI translations").
      Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blk-mq: map all HWQ also in hyperthreaded system · fe631457
      Authored by Max Gurtovoy
      This patch performs sequential mapping between CPUs and queues.
      If the system has more CPUs than HWQs, there are still CPUs left
      to map to HWQs. In a hyperthreaded system, map those unmapped CPUs
      and their siblings to the same HWQ.
      This actually fixes a bug that left HWQs unmapped in a system with
      2 sockets, 18 cores per socket, and 2 threads per core (72 CPUs
      total) running NVMEoF (which opens up to a maximum of 64 HWQs).
      
      Performance results running fio (72 jobs, 128 iodepth)
      using null_blk (with/without the patch):
      
      bs     IOPS read (submit_queues=72)  IOPS write (submit_queues=72)  IOPS read (submit_queues=24)  IOPS write (submit_queues=24)
      -----  ----------------------------  -----------------------------  ----------------------------  -----------------------------
      512    4890.4K/4723.5K               4524.7K/4324.2K                4280.2K/4264.3K               3902.4K/3909.5K
      1k     4910.1K/4715.2K               4535.8K/4309.6K                4296.7K/4269.1K               3906.8K/3914.9K
      2k     4906.3K/4739.7K               4526.7K/4330.6K                4301.1K/4262.4K               3890.8K/3900.1K
      4k     4918.6K/4730.7K               4556.1K/4343.6K                4297.6K/4264.5K               3886.9K/3893.9K
      8k     4906.4K/4748.9K               4550.9K/4346.7K                4283.2K/4268.8K               3863.4K/3858.2K
      16k    4903.8K/4782.6K               4501.5K/4233.9K                4292.3K/4282.3K               3773.1K/3773.5K
      32k    4885.8K/4782.4K               4365.9K/4184.2K                4307.5K/4289.4K               3780.3K/3687.3K
      64k    4822.5K/4762.7K               2752.8K/2675.1K                4308.8K/4312.3K               2651.5K/2655.7K
      128k   2388.5K/2313.8K               1391.9K/1375.7K                2142.8K/2152.2K               1395.5K/1374.2K
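      The mapping scheme the commit describes can be sketched in plain C. This is a hypothetical userspace model, not the kernel's blk_mq code: it assumes a fixed CPU count and the common (but not universal) enumeration in which CPU i and CPU i + NR_CPUS/2 are hyperthread siblings.

      ```c
      #define NR_CPUS   8
      #define NR_QUEUES 3

      /*
       * Model of the two-pass mapping: first map one CPU per HWQ in
       * sequence; then assign each leftover CPU the HWQ its thread
       * sibling received, so siblings share a queue and no HWQ is
       * left unmapped.
       */
      static void map_queues(int map[NR_CPUS])
      {
          int cpu, q = 0;

          for (cpu = 0; cpu < NR_CPUS; cpu++)
              map[cpu] = -1;                  /* unmapped */

          /* Pass 1: sequential mapping until HWQs run out. */
          for (cpu = 0; cpu < NR_CPUS && q < NR_QUEUES; cpu++, q++)
              map[cpu] = q;

          /* Pass 2: leftover CPUs follow their sibling's HWQ. */
          for (cpu = 0; cpu < NR_CPUS; cpu++) {
              int sib;

              if (map[cpu] >= 0)
                  continue;
              sib = (cpu + NR_CPUS / 2) % NR_CPUS;
              if (map[sib] >= 0)
                  map[cpu] = map[sib];
              else
                  map[cpu] = cpu % NR_QUEUES; /* fallback: round-robin */
          }
      }
      ```

      With 8 CPUs and 3 queues, pass 1 maps CPUs 0..2, and pass 2 places each remaining CPU on its sibling's queue, so every HWQ ends up with at least one CPU: the bug the commit fixes.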
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  7. 28 Jun, 2017: 3 commits
    • nvmet-rdma: register ib_client to not deadlock in device removal · f1d4ef7d
      Authored by Sagi Grimberg
      We can deadlock if we get a device removal event on a queue that
      is already in the process of destroying its cm_id, since
      rdma_destroy_id blocks until all events on that cm_id have
      drained. On the other hand, we cannot guarantee that
      rdma_destroy_id has been invoked, as we only have an indication
      that the queue disconnect flow has been queued (the queue state
      is updated before the release work is queued).

      So, we leave all queue removal to a separate ib_client to avoid
      this deadlock, as ib_client device removal runs in a different
      context than the cm_id itself.
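      The underlying pattern can be modeled in plain C. This is a hypothetical userspace sketch, not the kernel or RDMA API: a handler must not perform a teardown that waits for its own drain, so it only marks the teardown as deferred, and a separate "client removal" context (the ib_client remove callback in the real fix) performs it.

      ```c
      /* Illustrative model; all names are hypothetical. */
      struct queue {
          int in_event_handler;  /* set while the cm event handler runs   */
          int destroy_deferred;  /* teardown handed off to other context  */
          int destroyed;
      };

      /*
       * Destroying from inside the handler would block waiting for the
       * handler's own drain; in this model we report that instead of
       * deadlocking.
       */
      static int destroy_id(struct queue *q)
      {
          if (q->in_event_handler)
              return -1;         /* would deadlock: drain waits on us */
          q->destroyed = 1;
          return 0;
      }

      /* Event handler: never destroy inline; defer to the other context. */
      static void device_removal_event(struct queue *q)
      {
          q->in_event_handler = 1;
          q->destroy_deferred = 1;   /* analogous to the ib_client handoff */
          q->in_event_handler = 0;
      }

      /* Separate context, free of the cm_id's event machinery. */
      static void client_remove(struct queue *q)
      {
          if (q->destroy_deferred)
              destroy_id(q);
      }
      ```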
      Reported-by: Shiraz Saleem <shiraz.saleem@intel.com>
      Tested-by: Shiraz Saleem <shiraz.saleem@intel.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme_fc: fix error recovery on link down. · 69fa9646
      Authored by James Smart
      Currently, the fc transport invokes nvme_fc_error_recovery() on
      every io in which the transport detects an error, which means:
      a) it's really noisy on large io loads that all get hit by a
         link down.
      b) we repeatedly call nvme_stop_queues() even though the queues
         are stopped upon the first error or as the first steps of
         reset_work.

      Correct this as follows: errors are only meaningful if the
      controller is in the LIVE state, so enact the reset_work only if
      LIVE; if called repeatedly, the state will already have
      transitioned. There is no need to stop the queues here; let the
      first steps of reset_work do the queue stopping.
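      The gating logic can be sketched in plain C. This is a hypothetical model, not kernel code: state names echo the nvme controller states, and the counter stands in for queuing reset_work.

      ```c
      /* Illustrative model of "enact reset only when LIVE". */
      enum ctrl_state { CTRL_LIVE, CTRL_RESETTING, CTRL_DELETING };

      struct ctrl {
          enum ctrl_state state;
          int resets_scheduled;      /* stands in for queued reset_work */
      };

      /* Error recovery entry point: errors matter only in LIVE. */
      static void error_recovery(struct ctrl *c)
      {
          if (c->state != CTRL_LIVE)
              return;                /* already resetting or deleting */
          c->state = CTRL_RESETTING; /* transition before queuing work */
          c->resets_scheduled++;     /* reset_work will stop the queues */
      }
      ```

      Under a storm of io errors from a link down, only the first call transitions the state and schedules a reset; the rest are no-ops, which is exactly the noise reduction the commit describes.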
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvmet_fc: fix crashes on bad opcodes · 188f7e8a
      Authored by James Smart
      If an nvme command is issued with an opcode that is not supported
      by the target (example: opcode 21 - detach namespace), the target
      crashes due to a null pointer dereference.

      nvmet_req_init() detects the bad opcode and immediately calls the
      nvme command done routine with an error status, allowing the
      transport to send the response. However, the FC transport was
      aborting the command on error, so the abort freed the lldd
      pointer, but the response transmit path referenced it post the
      free.

      Fix by removing the abort call on nvmet_req_init() failure.
      The completion response will be sent with an error status code.

      As the completion path will terminate the io, ensure the data_sg
      lists show an unused state so that teardown paths are successful.
      Signed-off-by: Paul Ely <Paul.Ely@broadcom.com>
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>