1. 06 Jan 2021, 7 commits
    • nvmet-rdma: Fix list_del corruption on queue establishment failure · 9ceb7863
      Committed by Israel Rukshin
      When a queue is in the NVMET_RDMA_Q_CONNECTING state, it may have
      requests on rsp_wait_list. If a disconnect occurs in this state,
      nothing empties that list or returns its requests to the free_rsps
      list. Normally nvmet_rdma_queue_established() frees those requests
      after moving the queue to the NVMET_RDMA_Q_LIVE state, but in this
      case __nvmet_rdma_queue_disconnect() is called first. The crash
      happens in nvmet_rdma_free_rsps() when it calls
      list_del(&rsp->free_list), because the request exists only on the
      wait list. To fix the issue, simply clear rsp_wait_list when
      destroying the queue.
      Signed-off-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: unexport functions with no external caller · 9b66fc02
      Committed by Minwoo Im
      There are no external callers for nvme_reset_ctrl_sync() and
      nvme_alloc_request_qid(), so there is no reason to keep the
      symbols exported.
      
      Unexport those functions, mark them static and update the header file
      respectively.
      Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: avoid possible double fetch in handling CQE · 62df8016
      Committed by Lalithambika Krishnakumar
      While handling the completion queue, keep a local copy of the command id
      from the DMA-accessible completion entry. This silences a time-of-check
      to time-of-use (TOCTOU) warning from KF/x[1], with respect to a
      Thunderclap[2] vulnerability analysis. The double-read impact appears
      benign.
      
      There may be a theoretical window for @command_id to be used as an
      adversary-controlled array-index-value for mounting a speculative
      execution attack, but that mitigation is saved for a potential follow-on.
      A man-in-the-middle attack on the data payload is out of scope for this
      analysis and is hopefully mitigated by filesystem integrity mechanisms.
      
      [1] https://github.com/intel/kernel-fuzzer-for-xen-project
      [2] http://thunderclap.io/thunderclap-paper-ndss2019.pdf
      Signed-off-by: Lalithambika Krishnakumar <lalithambika.krishnakumar@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-tcp: Fix possible race of io_work and direct send · 5c11f7d9
      Committed by Sagi Grimberg
      We may send a request (with or without its data) from two paths:
      
        1. From our I/O context nvme_tcp_io_work which is triggered from:
          - queue_rq
          - r2t reception
          - socket data_ready and write_space callbacks
        2. Directly from queue_rq if the send_list is empty (because we want to
           save the context switch associated with scheduling our io_work).
      
      However, now that we have the send_mutex, we may run into a race
      where none of these contexts sends the pending payload to the
      controller. Both the io_work send path and the queue_rq send path
      opportunistically attempt to acquire the send_mutex, but queue_rq
      only attempts to send a single request, and if the io_work context
      fails to acquire the send_mutex it completes without rescheduling
      itself.
      
      The race can trigger with the following sequence:
      
        1. queue_rq sends a request (no in-capsule data) and blocks
        2. RX path receives r2t - prepares a data PDU to send, adds the
           h2cdata PDU to the send_list and schedules io_work
        3. io_work triggers but cannot acquire the send_mutex - because
           of (1) - and ends without rescheduling itself
        4. queue_rq finishes its send and returns
      
      ==> no context will send the h2cdata - timeout.
      
      Fix this by having queue_rq send as much as it can from the
      send_list, so that if anything is left over, it is because the
      socket buffer is full and the socket write_space callback will
      trigger, guaranteeing that a context will be scheduled to send
      the h2cdata PDU.
      
      Fixes: db5ad6b7 ("nvme-tcp: try to send request in queue_rq context")
      Reported-by: Potnuri Bharat Teja <bharat@chelsio.com>
      Reported-by: Samuel Jones <sjones@kalrayinc.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Tested-by: Potnuri Bharat Teja <bharat@chelsio.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-pci: mark Samsung PM1725a as IGNORE_DEV_SUBNQN · 7ee5c78c
      Committed by Gopal Tiwari
      A system with more than one of these SSDs ends up with only one
      usable device: the kernel rejects the others during probe because
      they report duplicate cntlids.
      
      [    6.274554] nvme nvme1: Duplicate cntlid 33 with nvme0, rejecting
      [    6.274566] nvme nvme1: Removing after probe failure status: -22
      
      Adding the NVME_QUIRK_IGNORE_DEV_SUBNQN quirk resolves the issue.
      Signed-off-by: Gopal Tiwari <gtiwari@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fcloop: Fix sscanf type and list_first_entry_or_null warnings · 2b54996b
      Committed by James Smart
      Kernel robot had the following warnings:
      
      >> fcloop.c:1506:6: warning: %x in format string (no. 1) requires
      >> 'unsigned int *' but the argument type is 'signed int *'.
      >> [invalidScanfArgType_int]
      >>    if (sscanf(buf, "%x:%d:%d", &opcode, &starting, &amount) != 3)
      >>        ^
      
      Resolve by changing opcode from an int to an unsigned int,
      
      and
      
      >>  fcloop.c:1632:32: warning: Uninitialized variable: lport [uninitvar]
      >>     ret = __wait_localport_unreg(lport);
      >>                                  ^
      
      >>  fcloop.c:1615:28: warning: Uninitialized variable: nport [uninitvar]
      >>     ret = __remoteport_unreg(nport, rport);
      >>                              ^
      
      These aren't actual issues as the values are assigned prior to use.
      It appears the tool doesn't understand list_first_entry_or_null().
      Regardless, quiet the tool by initializing the pointers to NULL at
      declaration.
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: avoid calling _nvme_fc_abort_outstanding_ios from interrupt context · 19fce047
      Committed by James Smart
      Recent patches changed the calling sequences.
      nvme_fc_abort_outstanding_ios used to be called from a timeout or
      work context. Now it is called from an io completion context,
      which can be an interrupt handler. Unfortunately, the abort
      routine attempts to stop nvme queues and calls nested routines
      that may sleep, which conflicts with running in an interrupt
      handler.
      
      Correct this by replacing the direct call with the scheduling of
      a work element; the abort-outstanding-ios routine is then called
      from the work element.
      
      Fixes: 95ced8a2 ("nvme-fc: eliminate terminate_io use by nvme_fc_error_recovery")
      Signed-off-by: James Smart <james.smart@broadcom.com>
      Reported-by: Daniel Wagner <dwagner@suse.de>
      Tested-by: Daniel Wagner <dwagner@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  2. 08 Dec 2020, 1 commit
  3. 05 Dec 2020, 1 commit
  4. 02 Dec 2020, 23 commits
  5. 18 Nov 2020, 1 commit
  6. 16 Nov 2020, 3 commits
  7. 14 Nov 2020, 3 commits
  8. 13 Nov 2020, 1 commit