1. 09 Jun 2018 (5 commits)
  2. 30 May 2018 (2 commits)
  3. 29 May 2018 (1 commit)
  4. 25 May 2018 (2 commits)
  5. 21 May 2018 (1 commit)
    • nvme-pci: fix race between poll and IRQ completions · 68fa9dbe
      Committed by Jens Axboe
      If completion polling races with the IRQ triggered by a completion,
      the IRQ handler will find no work and return IRQ_NONE.
      This can trigger complaints about spurious interrupts:
      
      [  560.169153] irq 630: nobody cared (try booting with the "irqpoll" option)
      [  560.175988] CPU: 40 PID: 0 Comm: swapper/40 Not tainted 4.17.0-rc2+ #65
      [  560.175990] Hardware name: Intel Corporation S2600STB/S2600STB, BIOS SE5C620.86B.00.01.0010.010920180151 01/09/2018
      [  560.175991] Call Trace:
      [  560.175994]  <IRQ>
      [  560.176005]  dump_stack+0x5c/0x7b
      [  560.176010]  __report_bad_irq+0x30/0xc0
      [  560.176013]  note_interrupt+0x235/0x280
      [  560.176020]  handle_irq_event_percpu+0x51/0x70
      [  560.176023]  handle_irq_event+0x27/0x50
      [  560.176026]  handle_edge_irq+0x6d/0x180
      [  560.176031]  handle_irq+0xa5/0x110
      [  560.176036]  do_IRQ+0x41/0xc0
      [  560.176042]  common_interrupt+0xf/0xf
      [  560.176043]  </IRQ>
      [  560.176050] RIP: 0010:cpuidle_enter_state+0x9b/0x2b0
      [  560.176052] RSP: 0018:ffffa0ed4659fe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
      [  560.176055] RAX: ffff9527beb20a80 RBX: 000000826caee491 RCX: 000000000000001f
      [  560.176056] RDX: 000000826caee491 RSI: 00000000335206ee RDI: 0000000000000000
      [  560.176057] RBP: 0000000000000001 R08: 00000000ffffffff R09: 0000000000000008
      [  560.176059] R10: ffffa0ed4659fe78 R11: 0000000000000001 R12: ffff9527beb29358
      [  560.176060] R13: ffffffffa235d4b8 R14: 0000000000000000 R15: 000000826caed593
      [  560.176065]  ? cpuidle_enter_state+0x8b/0x2b0
      [  560.176071]  do_idle+0x1f4/0x260
      [  560.176075]  cpu_startup_entry+0x6f/0x80
      [  560.176080]  start_secondary+0x184/0x1d0
      [  560.176085]  secondary_startup_64+0xa5/0xb0
      [  560.176088] handlers:
      [  560.178387] [<00000000efb612be>] nvme_irq [nvme]
      [  560.183019] Disabling IRQ #630
      
      A previous commit removed ->cqe_seen, which handled this case, but
      we now need to handle it slightly differently because completions
      run outside the queue lock. Return IRQ_HANDLED from the IRQ handler
      if the completion queue head has moved since we last saw it (see
      the sketch after this entry).
      
      Fixes: 5cb525c8 ("nvme-pci: handle completions outside of the queue lock")
      Reported-by: Keith Busch <keith.busch@intel.com>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Tested-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
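
      A minimal sketch of the idea, assuming simplified stand-ins for the
      driver's internals (demo_queue, reap_cq() and complete_cqes() below
      are hypothetical, not the real nvme-pci symbols):

      	#include <linux/interrupt.h>
      	#include <linux/spinlock.h>
      	#include <linux/types.h>

      	/* Hypothetical per-queue state; the real driver tracks more. */
      	struct demo_queue {
      		spinlock_t cq_lock;
      		u16 cq_head;		/* advanced as CQEs are consumed */
      		u16 last_cq_head;	/* head seen by the previous IRQ */
      	};

      	/* Stand-in: would walk new CQEs and advance q->cq_head. */
      	static void reap_cq(struct demo_queue *q, u16 *start, u16 *end)
      	{
      		*start = *end = q->cq_head;
      	}

      	/* Stand-in: would complete the requests for CQEs [start, end). */
      	static void complete_cqes(struct demo_queue *q, u16 start, u16 end)
      	{
      	}

      	static irqreturn_t demo_nvme_irq(int irq, void *data)
      	{
      		struct demo_queue *q = data;
      		irqreturn_t ret = IRQ_NONE;
      		u16 start, end;

      		spin_lock(&q->cq_lock);
      		/*
      		 * Even if a poller already consumed the CQEs that raised
      		 * this interrupt, the head has moved since the last IRQ:
      		 * report IRQ_HANDLED so the spurious-interrupt detector
      		 * does not disable the line.
      		 */
      		if (q->cq_head != q->last_cq_head)
      			ret = IRQ_HANDLED;
      		reap_cq(q, &start, &end);
      		q->last_cq_head = q->cq_head;
      		spin_unlock(&q->cq_lock);

      		if (start != end) {
      			complete_cqes(q, start, end);	/* outside the lock */
      			return IRQ_HANDLED;
      		}
      		return ret;
      	}
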
  6. 19 May 2018 (6 commits)
  7. 12 May 2018 (2 commits)
  8. 07 May 2018 (1 commit)
  9. 02 May 2018 (1 commit)
  10. 27 Apr 2018 (2 commits)
  11. 12 Apr 2018 (3 commits)
  12. 28 Mar 2018 (1 commit)
  13. 26 Mar 2018 (3 commits)
  14. 02 Mar 2018 (2 commits)
    • nvme: pci: pass max vectors as num_possible_cpus() to pci_alloc_irq_vectors · 16ccfff2
      Committed by Ming Lei
      Commit 84676c1f ("genirq/affinity: assign vectors to all possible CPUs")
      switched IRQ vector spreading to cover all possible CPUs, so pass
      num_possible_cpus() as the maximum number of vectors to be assigned
      (see the sketch after this entry).
      
      For example, in an 8-core system with CPUs 0-3 online and 4-7
      offline/not present, see 'lscpu':
      
              [ming@box]$lscpu
              Architecture:          x86_64
              CPU op-mode(s):        32-bit, 64-bit
              Byte Order:            Little Endian
              CPU(s):                4
              On-line CPU(s) list:   0-3
              Thread(s) per core:    1
              Core(s) per socket:    2
              Socket(s):             2
              NUMA node(s):          2
              ...
              NUMA node0 CPU(s):     0-3
              NUMA node1 CPU(s):
              ...
      
      1) Before this patch, the allocated vectors and their affinity were:
      	irq 47, cpu list 0,4
      	irq 48, cpu list 1,6
      	irq 49, cpu list 2,5
      	irq 50, cpu list 3,7
      
      2) After this patch, the allocated vectors and their affinity are:
      	irq 43, cpu list 0
      	irq 44, cpu list 1
      	irq 45, cpu list 2
      	irq 46, cpu list 3
      	irq 47, cpu list 4
      	irq 48, cpu list 6
      	irq 49, cpu list 5
      	irq 50, cpu list 7
      
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
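
      A hedged sketch of the call, assuming a simplified probe helper
      (demo_alloc_queue_vectors() is illustrative, not the driver's real
      function): the upper bound handed to pci_alloc_irq_vectors() becomes
      num_possible_cpus() rather than num_online_cpus(), so each possible
      CPU can end up with its own vector instead of sharing one.

      	#include <linux/cpumask.h>
      	#include <linux/pci.h>

      	static int demo_alloc_queue_vectors(struct pci_dev *pdev)
      	{
      		/* was num_online_cpus(); possible CPUs each get a vector */
      		unsigned int max_vecs = num_possible_cpus();

      		return pci_alloc_irq_vectors(pdev, 1, max_vecs,
      					     PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
      	}
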
    • nvme-pci: Fix EEH failure on ppc · 651438bb
      Committed by Wen Xiong
      Triggering PPC EEH detection and handling requires a memory mapped read
      failure. The NVMe driver removed the periodic health check MMIO, so
      there's no early detection mechanism to trigger the recovery. Instead,
      the detection now happens when the nvme driver handles an IO timeout
      event. This takes the pci channel offline, so we do not want the driver
      to proceed with escalating its own recovery efforts that may conflict
      with the EEH handler.
      
      This patch ensures the driver observes that the channel was taken
      offline after a failed MMIO read, and resets the IO timer so the EEH
      handler has a chance to recover the device (see the sketch after
      this entry).
      Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
      [updated change log]
      Signed-off-by: Keith Busch <keith.busch@intel.com>
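
      A brief sketch of the check, assuming a hypothetical helper
      (demo_defer_to_eeh() is illustrative; the real test sits at the top
      of the driver's timeout handler):

      	#include <asm/barrier.h>
      	#include <linux/pci.h>
      	#include <linux/types.h>

      	/* Returns true when PCI error recovery (EEH) owns the device. */
      	static bool demo_defer_to_eeh(struct pci_dev *pdev)
      	{
      		mb();	/* observe offline state set by the failed MMIO read */
      		return pci_channel_offline(pdev);
      	}

      The timeout handler would call this before escalating and return
      BLK_EH_RESET_TIMER when it reports true, rearming the IO timer so the
      EEH handler can finish recovering the device.
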
  15. 26 Feb 2018 (1 commit)
  16. 14 Feb 2018 (2 commits)
  17. 09 Feb 2018 (1 commit)
  18. 26 Jan 2018 (1 commit)
  19. 25 Jan 2018 (1 commit)
  20. 24 Jan 2018 (1 commit)
  21. 18 Jan 2018 (1 commit)