1. 01 Jun, 2018 (3 commits)
  2. 30 May, 2018 (3 commits)
  3. 29 May, 2018 (1 commit)
  4. 25 May, 2018 (9 commits)
  5. 23 May, 2018 (2 commits)
  6. 21 May, 2018 (1 commit)
    • nvme-pci: fix race between poll and IRQ completions · 68fa9dbe
      Jens Axboe committed
      If polling completions are racing with the IRQ triggered by a
      completion, the IRQ handler will find no work and return IRQ_NONE.
      This can trigger complaints about spurious interrupts:
      
      [  560.169153] irq 630: nobody cared (try booting with the "irqpoll" option)
      [  560.175988] CPU: 40 PID: 0 Comm: swapper/40 Not tainted 4.17.0-rc2+ #65
      [  560.175990] Hardware name: Intel Corporation S2600STB/S2600STB, BIOS SE5C620.86B.00.01.0010.010920180151 01/09/2018
      [  560.175991] Call Trace:
      [  560.175994]  <IRQ>
      [  560.176005]  dump_stack+0x5c/0x7b
      [  560.176010]  __report_bad_irq+0x30/0xc0
      [  560.176013]  note_interrupt+0x235/0x280
      [  560.176020]  handle_irq_event_percpu+0x51/0x70
      [  560.176023]  handle_irq_event+0x27/0x50
      [  560.176026]  handle_edge_irq+0x6d/0x180
      [  560.176031]  handle_irq+0xa5/0x110
      [  560.176036]  do_IRQ+0x41/0xc0
      [  560.176042]  common_interrupt+0xf/0xf
      [  560.176043]  </IRQ>
      [  560.176050] RIP: 0010:cpuidle_enter_state+0x9b/0x2b0
      [  560.176052] RSP: 0018:ffffa0ed4659fe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
      [  560.176055] RAX: ffff9527beb20a80 RBX: 000000826caee491 RCX: 000000000000001f
      [  560.176056] RDX: 000000826caee491 RSI: 00000000335206ee RDI: 0000000000000000
      [  560.176057] RBP: 0000000000000001 R08: 00000000ffffffff R09: 0000000000000008
      [  560.176059] R10: ffffa0ed4659fe78 R11: 0000000000000001 R12: ffff9527beb29358
      [  560.176060] R13: ffffffffa235d4b8 R14: 0000000000000000 R15: 000000826caed593
      [  560.176065]  ? cpuidle_enter_state+0x8b/0x2b0
      [  560.176071]  do_idle+0x1f4/0x260
      [  560.176075]  cpu_startup_entry+0x6f/0x80
      [  560.176080]  start_secondary+0x184/0x1d0
      [  560.176085]  secondary_startup_64+0xa5/0xb0
      [  560.176088] handlers:
      [  560.178387] [<00000000efb612be>] nvme_irq [nvme]
      [  560.183019] Disabling IRQ #630
      
      A previous commit removed ->cqe_seen, which handled this case, but
      we need to handle it a bit differently now that completions run
      outside the queue lock. Return IRQ_HANDLED from the IRQ handler if
      the completion ring head has moved since we last saw it.
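      
      For illustration, the fix amounts to roughly the sketch below. The
      helper and field names (nvme_process_cq(), nvme_complete_cqes(),
      last_cq_head) follow the nvme-pci driver of that era and are
      assumptions here, not a verbatim copy of the patch:
      
          /*
           * Sketch: remember the CQ head we last saw. If it has moved by
           * the time the IRQ fires, someone (e.g. a poller) made progress,
           * so report IRQ_HANDLED even when this invocation finds no new
           * CQEs of its own.
           */
          static irqreturn_t nvme_irq(int irq, void *data)
          {
                  struct nvme_queue *nvmeq = data;
                  irqreturn_t ret = IRQ_NONE;
                  u16 start, end;
      
                  spin_lock(&nvmeq->cq_lock);
                  if (nvmeq->cq_head != nvmeq->last_cq_head)
                          ret = IRQ_HANDLED;  /* a racing poller moved the head */
                  nvme_process_cq(nvmeq, &start, &end, -1);
                  nvmeq->last_cq_head = nvmeq->cq_head;
                  spin_unlock(&nvmeq->cq_lock);
      
                  if (start != end) {
                          nvme_complete_cqes(nvmeq, start, end);
                          return IRQ_HANDLED;
                  }
      
                  return ret;
          }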
      
      Fixes: 5cb525c8 ("nvme-pci: handle completions outside of the queue lock")
      Reported-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Tested-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  7. 19 May, 2018 (7 commits)
  8. 12 May, 2018 (1 commit)
  9. 07 May, 2018 (1 commit)
  10. 03 May, 2018 (5 commits)
  11. 02 May, 2018 (1 commit)
  12. 27 Apr, 2018 (5 commits)
  13. 12 Apr, 2018 (1 commit)
    • nvme: expand nvmf_check_if_ready checks · bb06ec31
      James Smart committed
      The nvmf_check_if_ready() checks that were added are very simplistic.
      As such, the routine allows a lot of cases to fail I/Os during windows
      of reset or re-connection. In cases where no multipath options are
      present, the error goes back to the caller, the filesystem or
      application. Not good.
      
      The common routine was rewritten and calling syntax slightly expanded
      so that per-transport is_ready routines don't need to be present.
      The transports now call the routine directly. The routine is now a
      fabrics routine rather than an inline function.
      
      The routine now looks at the controller state to decide what action
      to take. Some states mandate I/O failure; others define the conditions
      under which a command can be accepted. When the decision is unclear, a
      generic queue-or-reject check looks for failfast or multipath I/Os and
      fails the I/O only if it is so marked; otherwise the I/O is queued to
      wait for the controller state to resolve.
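      
      A minimal sketch of that decision flow, assuming the 4.17-era
      controller states and block status codes (the real routine handles
      more states and the connect-time special cases):
      
          /* Sketch only, not the actual nvmf_check_if_ready() body. */
          static blk_status_t check_if_ready_sketch(struct nvme_ctrl *ctrl,
                                                    struct request *rq)
          {
                  switch (ctrl->state) {
                  case NVME_CTRL_LIVE:
                          return BLK_STS_OK;      /* fully up: accept the command */
                  case NVME_CTRL_DELETING:
                          return BLK_STS_IOERR;   /* this state mandates failure */
                  default:
                          break;
                  }
      
                  /*
                   * Unclear cases (reset, reconnect, ...): fail only I/Os
                   * marked failfast or routed through multipath; everything
                   * else is requeued until the controller state resolves.
                   */
                  if (blk_noretry_request(rq) || (rq->cmd_flags & REQ_NVME_MPATH))
                          return BLK_STS_IOERR;
                  return BLK_STS_RESOURCE;
          }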
      
      Admin commands issued via ioctl share a live admin queue with the
      commands the transport sends for controller init, so the ioctls can
      be intermixed with the initialization commands and an ioctl command
      could be issued before the controller is enabled. To block this,
      ioctl admin commands need to be distinguished from the admin commands
      used for controller init. Add a USERCMD nvme_req(req)->rq_flags bit
      to reflect this division and set it on ioctl requests. As the
      nvmf_check_if_ready() routine is called prior to nvme_setup_cmd(),
      ensure that commands allocated by the ioctl path (actually anything
      in core.c) prep the nvme_req(req) before starting the I/O, so the
      USERCMD flag is preserved across execution and/or retry.
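      
      A sketch of the idea, using a hypothetical helper name and the
      field/flag names from the description above (the exact core.c code
      differs in detail):
      
          /*
           * Prep the per-request NVMe state before the request is started,
           * so that flags set here survive execution and retries, and tag
           * the request as a user-issued (ioctl) command.
           */
          static void nvme_prep_user_request(struct request *req)
          {
                  nvme_req(req)->retries = 0;
                  nvme_req(req)->rq_flags = 0;
                  nvme_req(req)->rq_flags |= NVME_REQ_USERCMD;  /* issued via ioctl */
          }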
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>