1. 30 March 2018, 15 commits
  2. 28 March 2018, 1 commit
  3. 26 March 2018, 18 commits
  4. 09 March 2018, 3 commits
  5. 07 March 2018, 1 commit
  6. 02 March 2018, 2 commits
    • nvme: pci: pass max vectors as num_possible_cpus() to pci_alloc_irq_vectors · 16ccfff2
      Authored by Ming Lei
      84676c1f ("genirq/affinity: assign vectors to all possible CPUs")
      switched irq vector spreading to cover all possible CPUs, so pass
      num_possible_cpus() as the maximum number of vectors to be assigned
      (see the sketch after the examples below).
      
      For example, in an 8-core system with CPUs 0-3 online and 4-7 offline/not
      present, 'lscpu' shows:
      
              [ming@box]$lscpu
              Architecture:          x86_64
              CPU op-mode(s):        32-bit, 64-bit
              Byte Order:            Little Endian
              CPU(s):                4
              On-line CPU(s) list:   0-3
              Thread(s) per core:    1
              Core(s) per socket:    2
              Socket(s):             2
              NUMA node(s):          2
              ...
              NUMA node0 CPU(s):     0-3
              NUMA node1 CPU(s):
              ...
      
      1) Before this patch, the allocated vectors and their affinity were:
      	irq 47, cpu list 0,4
      	irq 48, cpu list 1,6
      	irq 49, cpu list 2,5
      	irq 50, cpu list 3,7
      
      2) After this patch, the allocated vectors and their affinity are:
      	irq 43, cpu list 0
      	irq 44, cpu list 1
      	irq 45, cpu list 2
      	irq 46, cpu list 3
      	irq 47, cpu list 4
      	irq 48, cpu list 6
      	irq 49, cpu list 5
      	irq 50, cpu list 7
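      A minimal sketch of the idea in kernel C, assuming nvme-pci conventions
      (the helper name nvme_setup_irqs_sketch and the exact flags are
      illustrative, not the literal patch):
      
	#include <linux/cpumask.h>
	#include <linux/interrupt.h>
	#include <linux/pci.h>
	
	/* Request one vector per possible CPU (plus one for the admin queue),
	 * so CPUs that are offline or not yet present still get a dedicated
	 * vector once they come online. */
	static int nvme_setup_irqs_sketch(struct pci_dev *pdev)
	{
		unsigned int nr_io_queues = num_possible_cpus();
		struct irq_affinity affd = { .pre_vectors = 1 };	/* admin queue */
	
		return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
				PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	}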
      
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Sagi Grimberg <sagi@grimberg.me>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
    • nvme-pci: Fix EEH failure on ppc · 651438bb
      Authored by Wen Xiong
      Triggering PPC EEH detection and handling requires a memory-mapped read
      failure. The NVMe driver removed the periodic health-check MMIO, so
      there is no early detection mechanism to trigger the recovery. Instead,
      detection now happens when the nvme driver handles an I/O timeout
      event. This takes the PCI channel offline, so we do not want the driver
      to proceed with escalating its own recovery efforts, which may conflict
      with the EEH handler.
      
      This patch ensures the driver observes that the channel was set offline
      after a failed MMIO read and resets the I/O timer so the EEH handler has
      a chance to recover the device.
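      
      A hedged sketch of that ordering in kernel C (the helper name
      nvme_timeout_sketch is illustrative; dev->bar and dev->dev follow the
      nvme-pci driver's struct nvme_dev, assumed here):
      
	#include <linux/blkdev.h>
	#include <linux/io.h>
	#include <linux/nvme.h>
	#include <linux/pci.h>
	
	static enum blk_eh_timer_return nvme_timeout_sketch(struct nvme_dev *dev)
	{
		/* Reading CSTS is the MMIO access that can trip EEH detection. */
		u32 csts = readl(dev->bar + NVME_REG_CSTS);
	
		/* EEH marked the channel offline: reset the block-layer timer
		 * and let the EEH handler recover the device instead of
		 * escalating our own reset from the timeout path. */
		if (pci_channel_offline(to_pci_dev(dev->dev)))
			return BLK_EH_RESET_TIMER;
	
		(void)csts;	/* normal timeout handling would inspect CSTS here */
		return BLK_EH_HANDLED;
	}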
      Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
      [updated change log]
      Signed-off-by: Keith Busch <keith.busch@intel.com>