1. 17 June 2021, 1 commit
    • nvme-pci: fix var. type for increasing cq_head · a0aac973
      JK Kim authored
      nvmeq->cq_head is compared with nvmeq->q_depth, and on wrap-around both
      it and cq_phase are updated to handle the next CQ doorbell.

      However, nvmeq->q_depth's type is u32, and its maximum value is 0x10000
      when CAP.MQES is 0xffff and io_queue_depth is 0x10000.

      The temporary variable used for the comparison with nvmeq->q_depth
      overflows when the previous nvmeq->cq_head is 0xffff.

      In this case, nvmeq->cq_phase is not updated.
      Fix this by changing the temporary variable's data type to u32.
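
      A minimal sketch of the fixed helper (names follow the driver's
      nvme_update_cq_head(); treat the details as illustrative):

      ```c
      static inline void nvme_update_cq_head(struct nvme_queue *nvmeq)
      {
      	/* u32, not u16: cq_head + 1 can reach 0x10000 and must not wrap */
      	u32 tmp = nvmeq->cq_head + 1;

      	if (tmp == nvmeq->q_depth) {
      		nvmeq->cq_head = 0;
      		nvmeq->cq_phase ^= 1;	/* flip the phase on wrap-around */
      	} else {
      		nvmeq->cq_head = tmp;
      	}
      }
      ```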
      Signed-off-by: JK Kim <jongkang.kim2@gmail.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  2. 16 June 2021, 1 commit
  3. 03 June 2021, 1 commit
  4. 04 May 2021, 1 commit
    • nvme-pci: fix controller reset hang when racing with nvme_timeout · d4060d2b
      Tao Chiu authored
      reset_work() in nvme-pci may hang forever in the following scenario:
      1) A reset caused by a command timeout occurs while the controller is
         temporarily unresponsive.
      2) nvme_reset_work() restarts the admin queue at nvme_alloc_admin_tags().
         At the same time, a user-submitted admin command is queued and is
         waiting for completion. reset_work() then changes its state to
         CONNECTING and submits an identify command.
      3) However, the controller still does not respond to any command, so a
         timeout fires for the user-submitted command. Unfortunately,
         nvme_timeout() does not see a completion on the cq, and any timeout
         that takes place under the CONNECTING state causes a controller
         shutdown.
      4) Normally, the identify command in reset_work() would be canceled with
         SC_HOST_ABORTED by nvme_dev_disable(), and reset_work could then tear
         down the controller accordingly. But the controller happens to come
         back online and respond to the identify command before
         nvme_dev_disable() has reaped it.
      5) reset_work() continues to setup_io_queues() as it observes no error
         in init_identify(). However, the admin queue has already been
         quiesced in dev_disable(). Thus, any following commands are blocked
         forever in blk_execute_rq().

      This can be fixed by rejecting user commands in nvme_queue_rq() when the
      controller is not in the LIVE state, as was done previously for fabrics;
      a sketch follows the diagram below.
      
      ```
      nvme_reset_work():                     |
          nvme_alloc_admin_tags()            |
                                             | nvme_submit_user_cmd():
          nvme_init_identify():              |     ...
              __nvme_submit_sync_cmd():      |
                  ...                        |     ...
      ---------------------------------------> nvme_timeout():
      (Controller now responds to commands)  |     nvme_dev_disable(, true):
          nvme_setup_io_queues():            |
              __nvme_submit_sync_cmd():      |
                  (hung in blk_execute_rq    |
                   since run_hw_queue sees   |
                   queue quiesced)           |
      
      ```
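
      A minimal sketch of that gating, assuming the nvme_check_ready()/
      nvme_fail_nonready_command() helpers that the fabrics drivers use (treat
      the exact names and signatures as assumptions):

      ```c
      static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
      				  const struct blk_mq_queue_data *bd)
      {
      	struct nvme_queue *nvmeq = hctx->driver_data;
      	struct nvme_dev *dev = nvmeq->dev;
      	struct request *req = bd->rq;

      	/*
      	 * Fail user-submitted commands early while the controller is not
      	 * LIVE, as the fabrics path does, so a quiesced queue cannot
      	 * strand them in blk_execute_rq().
      	 */
      	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true)))
      		return nvme_fail_nonready_command(&dev->ctrl, req);

      	/* ... normal command setup and submission follows ... */
      	return BLK_STS_OK;
      }
      ```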
      Signed-off-by: Tao Chiu <taochiu@synology.com>
      Signed-off-by: Cody Wong <codywong@synology.com>
      Reviewed-by: Leon Chien <leonchien@synology.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  5. 15 April 2021, 2 commits
  6. 03 April 2021, 5 commits
  7. 11 March 2021, 1 commit
    • nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a Samsung PM1725a · abbb5f59
      Dmitry Monakhov authored
      This adds a quirk for the Samsung PM1725a drive, which fixes timeouts
      and I/O errors caused by the controller not properly handling the Write
      Zeroes command (a sketch of the quirk entry follows the log). dmesg log:
      
      nvme nvme0: I/O 528 QID 10 timeout, aborting
      nvme nvme0: I/O 529 QID 10 timeout, aborting
      nvme nvme0: I/O 530 QID 10 timeout, aborting
      nvme nvme0: I/O 531 QID 10 timeout, aborting
      nvme nvme0: I/O 532 QID 10 timeout, aborting
      nvme nvme0: I/O 533 QID 10 timeout, aborting
      nvme nvme0: I/O 534 QID 10 timeout, aborting
      nvme nvme0: I/O 535 QID 10 timeout, aborting
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: Abort status: 0x0
      nvme nvme0: I/O 528 QID 10 timeout, reset controller
      nvme nvme0: controller is down; will reset: CSTS=0x3, PCI_STATUS=0x10
      nvme nvme0: Device not ready; aborting reset, CSTS=0x3
      nvme nvme0: Device not ready; aborting reset, CSTS=0x3
      nvme nvme0: Removing after probe failure status: -19
      nvme0n1: detected capacity change from 6251233968 to 0
      blk_update_request: I/O error, dev nvme0n1, sector 32776 op 0x1:(WRITE) flags 0x3000 phys_seg 6 prio class 0
      blk_update_request: I/O error, dev nvme0n1, sector 113319936 op 0x9:(WRITE_ZEROES) flags 0x800 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 1, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113319680 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 2, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113319424 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 3, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113319168 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 4, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113318912 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 5, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113318656 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      Buffer I/O error on dev nvme0n1p2, logical block 6, lost async page write
      blk_update_request: I/O error, dev nvme0n1, sector 113318400 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      blk_update_request: I/O error, dev nvme0n1, sector 113318144 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      blk_update_request: I/O error, dev nvme0n1, sector 113317888 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
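
      A sketch of the corresponding entry in the driver's PCI ID table (the
      device ID shown is an assumption for illustration):

      ```c
      	{ PCI_DEVICE(0x144d, 0xa822),	/* Samsung PM1725a */
      		.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
      ```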
      Signed-off-by: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  8. 05 March 2021, 3 commits
  9. 26 February 2021, 1 commit
  10. 10 February 2021, 1 commit
  11. 02 February 2021, 2 commits
  12. 29 January 2021, 1 commit
    • nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a SPCC device · 89919929
      Chaitanya Kulkarni authored
      This adds a quirk for the SPCC 256GB NVMe 1.3 drive, which fixes
      timeouts and I/O errors caused by the controller not properly handling
      the Write Zeroes command (a sketch of how the quirk takes effect follows
      the log):
      
      [ 2745.659527] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G            E 5.10.6-BET #1
      [ 2745.659528] Hardware name: System manufacturer System Product Name/PRIME X570-P, BIOS 3001 12/04/2020
      [ 2776.138874] nvme nvme1: I/O 414 QID 3 timeout, aborting
      [ 2776.138886] nvme nvme1: I/O 415 QID 3 timeout, aborting
      [ 2776.138891] nvme nvme1: I/O 416 QID 3 timeout, aborting
      [ 2776.138895] nvme nvme1: I/O 417 QID 3 timeout, aborting
      [ 2776.138912] nvme nvme1: Abort status: 0x0
      [ 2776.138921] nvme nvme1: I/O 428 QID 3 timeout, aborting
      [ 2776.138922] nvme nvme1: Abort status: 0x0
      [ 2776.138925] nvme nvme1: Abort status: 0x0
      [ 2776.138974] nvme nvme1: Abort status: 0x0
      [ 2776.138977] nvme nvme1: Abort status: 0x0
      [ 2806.346792] nvme nvme1: I/O 414 QID 3 timeout, reset controller
      [ 2806.363566] nvme nvme1: 15/0/0 default/read/poll queues
      [ 2836.554298] nvme nvme1: I/O 415 QID 3 timeout, disable controller
      [ 2836.672064] blk_update_request: I/O error, dev nvme1n1, sector 16350 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672072] blk_update_request: I/O error, dev nvme1n1, sector 16093 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672074] blk_update_request: I/O error, dev nvme1n1, sector 15836 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672076] blk_update_request: I/O error, dev nvme1n1, sector 15579 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672078] blk_update_request: I/O error, dev nvme1n1, sector 15322 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672080] blk_update_request: I/O error, dev nvme1n1, sector 15065 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672082] blk_update_request: I/O error, dev nvme1n1, sector 14808 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672083] blk_update_request: I/O error, dev nvme1n1, sector 14551 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672085] blk_update_request: I/O error, dev nvme1n1, sector 14294 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672087] blk_update_request: I/O error, dev nvme1n1, sector 14037 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672121] nvme nvme1: failed to mark controller live state
      [ 2836.672123] nvme nvme1: Removing after probe failure status: -19
      [ 2836.689016] Aborting journal on device dm-0-8.
      [ 2836.689024] Buffer I/O error on dev dm-0, logical block 25198592, lost sync page write
      [ 2836.689027] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
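
      For reference, a simplified sketch of how the core driver honors the
      quirk when configuring a namespace (illustrative, not the exact code):

      ```c
      static void nvme_config_write_zeroes(struct gendisk *disk,
      				     struct nvme_ns *ns)
      {
      	/* Quirked controllers never get Write Zeroes enabled. */
      	if (ns->ctrl->quirks & NVME_QUIRK_DISABLE_WRITE_ZEROES)
      		return;

      	/* ... otherwise derive max_write_zeroes_sectors from the device ... */
      }
      ```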
      Reported-by: Bradley Chapman <chapman6235@comcast.net>
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Tested-by: Bradley Chapman <chapman6235@comcast.net>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  13. 25 January 2021, 1 commit
  14. 21 January 2021, 2 commits
  15. 19 January 2021, 1 commit
  16. 06 January 2021, 2 commits
  17. 02 December 2020, 5 commits
    • nvme-pci: don't allocate unused I/O queues · e3aef095
      Niklas Schnelle authored
      Currently the NVME_QUIRK_SHARED_TAGS quirk for Apple devices is handled
      during the assignment of nr_io_queues in nvme_setup_io_queues().
      This means that for these devices nvme_max_io_queues() does not actually
      return the supported maximum, which is confusing and unexpected, and it
      also means that nvme_probe() allocates I/O queues that will never be
      used.
      Fix this by moving the quirk handling into nvme_max_io_queues(), as
      sketched below.
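
      A minimal sketch of the relocated quirk handling (simplified from the
      description above):

      ```c
      static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
      {
      	/*
      	 * Apple controllers sharing tags between the admin and I/O
      	 * queues support only a single I/O queue.
      	 */
      	if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
      		return 1;
      	return num_possible_cpus() + dev->nr_write_queues +
      		dev->nr_poll_queues;
      }
      ```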
      Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-pci: drop min() from nr_io_queues assignment · ff4e5fba
      Niklas Schnelle authored
      In nvme_setup_io_queues() the number of I/O queues is set either to 1,
      in the case of a quirky Apple device, or to the min of
      nvme_max_io_queues() and dev->nr_allocated_queues - 1.
      This is unnecessarily complicated, as dev->nr_allocated_queues is
      assigned only once and is nvme_max_io_queues() + 1, so the min() can
      simply be dropped (see the sketch below).
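
      A sketch of the simplified assignment (illustrative):

      ```c
      	/*
      	 * dev->nr_allocated_queues == nvme_max_io_queues(dev) + 1, so
      	 * the min() against nvme_max_io_queues() was redundant.
      	 */
      	nr_io_queues = dev->nr_allocated_queues - 1;
      ```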
      Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: split nvme_alloc_request() · 39dfe844
      Chaitanya Kulkarni authored
      Right now nvme_alloc_request() allocates a request from the block layer
      based on the value of qid: when qid is NVME_QID_ANY it uses
      blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().

      The function nvme_alloc_request() is called from different contexts; the
      only place where a non-NVME_QID_ANY value is used is for fabrics connect
      commands :-
      
      nvme_submit_sync_cmd()		NVME_QID_ANY
      nvme_features()			NVME_QID_ANY
      nvme_sec_submit()		NVME_QID_ANY
      nvmf_reg_read32()		NVME_QID_ANY
      nvmf_reg_read64()		NVME_QID_ANY
      nvmf_reg_write32()		NVME_QID_ANY
      nvmf_connect_admin_queue()	NVME_QID_ANY
      nvme_submit_user_cmd()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_keep_alive()		NVME_QID_ANY
      	nvme_alloc_request()
      nvme_timeout()			NVME_QID_ANY
      	nvme_alloc_request()
      nvme_delete_queue()		NVME_QID_ANY
      	nvme_alloc_request()
      nvmet_passthru_execute_cmd()	NVME_QID_ANY
      	nvme_alloc_request()
      nvmf_connect_io_queue() 	QID
      	__nvme_submit_sync_cmd()
      		nvme_alloc_request()
      
      With passthru, nvme_alloc_request() now falls into the I/O fast path,
      where blk_mq_alloc_request_hctx() never gets called, yet the qid check
      adds an additional branch to the fast path.

      Split nvme_alloc_request() into nvme_alloc_request() and
      nvme_alloc_request_qid(), as sketched below.

      Replace each call of nvme_alloc_request() with the NVME_QID_ANY
      parameter with a call to the new nvme_alloc_request() that takes no qid.

      Replace the call of nvme_alloc_request() with the QID parameter with a
      call to either nvme_alloc_request() or nvme_alloc_request_qid(), based
      on the qid value set in __nvme_submit_sync_cmd().
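
      A minimal sketch of the split, assuming simplified signatures and the
      driver's nvme_req_op()/nvme_init_request() helpers (illustrative):

      ```c
      /* Fast path: no qid branch, always blk_mq_alloc_request(). */
      struct request *nvme_alloc_request(struct request_queue *q,
      		struct nvme_command *cmd, blk_mq_req_flags_t flags)
      {
      	struct request *req;

      	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
      	if (!IS_ERR(req))
      		nvme_init_request(req, cmd);
      	return req;
      }

      /* Slow path: only fabrics connect commands pass a real qid. */
      struct request *nvme_alloc_request_qid(struct request_queue *q,
      		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
      {
      	struct request *req;

      	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
      			qid ? qid - 1 : 0);
      	if (!IS_ERR(req))
      		nvme_init_request(req, cmd);
      	return req;
      }
      ```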
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: use consistent macro name for timeout · dc96f938
      Chaitanya Kulkarni authored
      This is purely a cleanup patch: add the NVME_ prefix to ADMIN_TIMEOUT to
      make it consistent with NVME_IO_TIMEOUT.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: centralize setting the timeout in nvme_alloc_request · 0d2e7c84
      Chaitanya Kulkarni authored
      The function nvme_alloc_request() is called from different contexts (I/O
      and admin queue), and callers coming from the I/O queue context do not
      set the I/O timeout.

      Update nvme_alloc_request() to set the default I/O or admin timeout
      based on whether queuedata is set, as sketched below.
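
      A sketch of the centralized default (only I/O queues carry queuedata;
      ADMIN_TIMEOUT is the pre-rename name of NVME_ADMIN_TIMEOUT above):

      ```c
      	if (req->q->queuedata)
      		req->timeout = NVME_IO_TIMEOUT;
      	else /* no queuedata implies an admin queue */
      		req->timeout = ADMIN_TIMEOUT;
      ```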
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  18. 14 November 2020, 1 commit
  19. 03 November 2020, 1 commit
  20. 22 October 2020, 1 commit
  21. 27 September 2020, 2 commits
  22. 22 September 2020, 1 commit
    • nvme-pci: fix NULL req in completion handler · 50b7c243
      Xianting Tian authored
      Currently we use nvmeq->q_depth as the upper limit for a valid tag in
      nvme_handle_cqe(). This is not correct, because the number of available
      tags is recorded in the tagset, which is not equal to nvmeq->q_depth.

      The nvme driver registers interrupts for queues before initializing the
      tagset, because it uses the number of successful request_irq() calls to
      configure the tagset parameters. This allows a race with the current tag
      validity check if the controller happens to produce an interrupt with a
      corrupted CQE before the tagset is initialized.

      Replace the driver's indirect tag check with the one already provided by
      the block layer, as sketched below.
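
      A minimal sketch of the block-layer-based check (the tagset-lookup
      helper name is an assumption for illustration):

      ```c
      static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
      {
      	struct nvme_completion *cqe = &nvmeq->cqes[idx];
      	struct request *req;

      	/*
      	 * blk_mq_tag_to_rq() returns NULL for an out-of-range or unused
      	 * tag, so no comparison against q_depth is needed.
      	 */
      	req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
      	if (unlikely(!req)) {
      		dev_warn(nvmeq->dev->ctrl.device,
      			 "invalid id %d completed on queue %d\n",
      			 cqe->command_id, le16_to_cpu(cqe->sq_id));
      		return;
      	}

      	/* ... complete the request as usual ... */
      }
      ```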
      Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  23. 15 September 2020, 1 commit
    • nvme-pci: disable the write zeros command for Intel 600P/P3100 · ce4cc313
      David Milburn authored
      The Write Zeroes command does not work with a 4k range.
      
      bash-4.4# ./blkdiscard /dev/nvme0n1p2
      bash-4.4# strace -efallocate xfs_io -c "fzero 536895488 2048" /dev/nvme0n1p2
      fallocate(3, FALLOC_FL_ZERO_RANGE, 536895488, 2048) = 0
      +++ exited with 0 +++
      bash-4.4# dd bs=1 if=/dev/nvme0n1p2 skip=536895488 count=512 | hexdump -C
      00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      *
      00000200
      
      bash-4.4# ./blkdiscard /dev/nvme0n1p2
      bash-4.4# strace -efallocate xfs_io -c "fzero 536895488 4096" /dev/nvme0n1p2
      fallocate(3, FALLOC_FL_ZERO_RANGE, 536895488, 4096) = 0
      +++ exited with 0 +++
      bash-4.4# dd bs=1 if=/dev/nvme0n1p2 skip=536895488 count=512 | hexdump -C
      00000000  5c 61 5c b0 96 21 1b 5e  85 0c 07 32 9c 8c eb 3c  |\a\..!.^...2...<|
      00000010  4a a2 06 ca 67 15 2d 8e  29 8d a8 a0 7e 46 8c 62  |J...g.-.)...~F.b|
      00000020  bb 4c 6c c1 6b f5 ae a5  e4 a9 bc 93 4f 60 ff 7a  |.Ll.k.......O`.z|
      Reported-by: Eric Sandeen <esandeen@redhat.com>
      Signed-off-by: David Milburn <dmilburn@redhat.com>
      Tested-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  24. 29 August 2020, 1 commit
    • nvme-pci: cancel nvme device request before disabling · 7ad92f65
      Tong Zhang authored
      This patch addresses an IRQ-free warning and a NULL pointer dereference
      that occur when an nvme device hits a timeout error during
      initialization. The problem happens when nvme_timeout() is called while
      nvme_reset_work() is still executing. Fix it by setting the problematic
      request's flag to NVME_REQ_CANCELLED before calling nvme_dev_disable(),
      which makes __nvme_submit_sync_cmd() return an error code and lets
      nvme_submit_sync_cmd() fail gracefully (see the sketch below).
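
      A simplified sketch of the reordering in nvme_timeout(), per the
      description above (illustrative):

      ```c
      	dev_warn_ratelimited(dev->ctrl.device,
      		 "I/O %d QID %d timeout, disable controller\n",
      		 req->tag, nvmeq->qid);
      	/*
      	 * Mark the request cancelled *before* disabling the device, so
      	 * __nvme_submit_sync_cmd() returns an error instead of touching
      	 * a torn-down queue.
      	 */
      	nvme_req(req)->flags |= NVME_REQ_CANCELLED;
      	nvme_dev_disable(dev, true);
      	return BLK_EH_DONE;
      ```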
      The following is console output.
      
      [   62.472097] nvme nvme0: I/O 13 QID 0 timeout, disable controller
      [   62.488796] nvme nvme0: could not set timestamp (881)
      [   62.494888] ------------[ cut here ]------------
      [   62.495142] Trying to free already-free IRQ 11
      [   62.495366] WARNING: CPU: 0 PID: 7 at kernel/irq/manage.c:1751 free_irq+0x1f7/0x370
      [   62.495742] Modules linked in:
      [   62.495902] CPU: 0 PID: 7 Comm: kworker/u4:0 Not tainted 5.8.0+ #8
      [   62.496206] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-p4
      [   62.496772] Workqueue: nvme-reset-wq nvme_reset_work
      [   62.497019] RIP: 0010:free_irq+0x1f7/0x370
      [   62.497223] Code: e8 ce 49 11 00 48 83 c4 08 4c 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 44 89 f6 48 c70
      [   62.498133] RSP: 0000:ffffa96800043d40 EFLAGS: 00010086
      [   62.498391] RAX: 0000000000000000 RBX: ffff9b87fc458400 RCX: 0000000000000000
      [   62.498741] RDX: 0000000000000001 RSI: 0000000000000096 RDI: ffffffff9693d72c
      [   62.499091] RBP: ffff9b87fd4c8f60 R08: ffffa96800043bfd R09: 0000000000000163
      [   62.499440] R10: ffffa96800043bf8 R11: ffffa96800043bfd R12: ffff9b87fd4c8e00
      [   62.499790] R13: ffff9b87fd4c8ea4 R14: 000000000000000b R15: ffff9b87fd76b000
      [   62.500140] FS:  0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000
      [   62.500534] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [   62.500816] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0
      [   62.501165] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [   62.501515] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [   62.501864] Call Trace:
      [   62.501993]  pci_free_irq+0x13/0x20
      [   62.502167]  nvme_reset_work+0x5d0/0x12a0
      [   62.502369]  ? update_load_avg+0x59/0x580
      [   62.502569]  ? ttwu_queue_wakelist+0xa8/0xc0
      [   62.502780]  ? try_to_wake_up+0x1a2/0x450
      [   62.502979]  process_one_work+0x1d2/0x390
      [   62.503179]  worker_thread+0x45/0x3b0
      [   62.503361]  ? process_one_work+0x390/0x390
      [   62.503568]  kthread+0xf9/0x130
      [   62.503726]  ? kthread_park+0x80/0x80
      [   62.503911]  ret_from_fork+0x22/0x30
      [   62.504090] ---[ end trace de9ed4a70f8d71e2 ]---
      [  123.912275] nvme nvme0: I/O 12 QID 0 timeout, disable controller
      [  123.914670] nvme nvme0: 1/0/0 default/read/poll queues
      [  123.916310] BUG: kernel NULL pointer dereference, address: 0000000000000000
      [  123.917469] #PF: supervisor write access in kernel mode
      [  123.917725] #PF: error_code(0x0002) - not-present page
      [  123.917976] PGD 0 P4D 0
      [  123.918109] Oops: 0002 [#1] SMP PTI
      [  123.918283] CPU: 0 PID: 7 Comm: kworker/u4:0 Tainted: G        W         5.8.0+ #8
      [  123.918650] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-48-gd9c812dda519-p4
      [  123.919219] Workqueue: nvme-reset-wq nvme_reset_work
      [  123.919469] RIP: 0010:__blk_mq_alloc_map_and_request+0x21/0x80
      [  123.919757] Code: 66 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 63 ee 53 48 8b 47 68 89 ee 48 89 fb 8b4
      [  123.920657] RSP: 0000:ffffa96800043d40 EFLAGS: 00010286
      [  123.920912] RAX: ffff9b87fc4fee40 RBX: ffff9b87fc8cb008 RCX: 0000000000000000
      [  123.921258] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9b87fc618000
      [  123.921602] RBP: 0000000000000000 R08: ffff9b87fdc2c4a0 R09: ffff9b87fc616000
      [  123.921949] R10: 0000000000000000 R11: ffff9b87fffd1500 R12: 0000000000000000
      [  123.922295] R13: 0000000000000000 R14: ffff9b87fc8cb200 R15: ffff9b87fc8cb000
      [  123.922641] FS:  0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000
      [  123.923032] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  123.923312] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0
      [  123.923660] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  123.924007] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      [  123.924353] Call Trace:
      [  123.924479]  blk_mq_alloc_tag_set+0x137/0x2a0
      [  123.924694]  nvme_reset_work+0xed6/0x12a0
      [  123.924898]  process_one_work+0x1d2/0x390
      [  123.925099]  worker_thread+0x45/0x3b0
      [  123.925280]  ? process_one_work+0x390/0x390
      [  123.925486]  kthread+0xf9/0x130
      [  123.925642]  ? kthread_park+0x80/0x80
      [  123.925825]  ret_from_fork+0x22/0x30
      [  123.926004] Modules linked in:
      [  123.926158] CR2: 0000000000000000
      [  123.926322] ---[ end trace de9ed4a70f8d71e3 ]---
      [  123.926549] RIP: 0010:__blk_mq_alloc_map_and_request+0x21/0x80
      [  123.926832] Code: 66 0f 1f 84 00 00 00 00 00 41 55 41 54 55 48 63 ee 53 48 8b 47 68 89 ee 48 89 fb 8b4
      [  123.927734] RSP: 0000:ffffa96800043d40 EFLAGS: 00010286
      [  123.927989] RAX: ffff9b87fc4fee40 RBX: ffff9b87fc8cb008 RCX: 0000000000000000
      [  123.928336] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff9b87fc618000
      [  123.928679] RBP: 0000000000000000 R08: ffff9b87fdc2c4a0 R09: ffff9b87fc616000
      [  123.929025] R10: 0000000000000000 R11: ffff9b87fffd1500 R12: 0000000000000000
      [  123.929370] R13: 0000000000000000 R14: ffff9b87fc8cb200 R15: ffff9b87fc8cb000
      [  123.929715] FS:  0000000000000000(0000) GS:ffff9b87fdc00000(0000) knlGS:0000000000000000
      [  123.930106] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [  123.930384] CR2: 0000000000000000 CR3: 000000003aa0a000 CR4: 00000000000006f0
      [  123.930731] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      [  123.931077] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Co-developed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Tong Zhang <ztong0001@gmail.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
  25. 24 August 2020, 1 commit