1. 19 February 2017 · 2 commits
  2. 14 February 2017 · 2 commits
  3. 25 January 2017 · 19 commits
  4. 13 January 2017 · 8 commits
  5. 11 January 2017 · 4 commits
    • RDMA: Adding ethertype ETH_P_IBOE · 69ae5439
      Committed by Selvin Xavier
      Update if_ether.h with the ethertype for InfiniBand over
      Ethernet packets. Also, remove the occurrences of the hardcoded
      0x8915 from InfiniBand vendor drivers.
      Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
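      For reference, the change amounts to giving the 0x8915 literal a
      name and substituting it at the call sites; a minimal sketch (the
      assignment shown is illustrative, actual call sites vary across
      the vendor drivers):

          /* include/uapi/linux/if_ether.h gains the named constant: */
          #define ETH_P_IBOE      0x8915  /* Infiniband over Ethernet */

          /* Vendor drivers then replace the literal, for example: */
          eth->h_proto = cpu_to_be16(ETH_P_IBOE); /* was: cpu_to_be16(0x8915) */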
    • iw_cxgb4: do not send RX_DATA_ACK CPLs after close/abort · 3bcf96e0
      Committed by Steve Wise
      Function rx_data(), which handles ingress CPL_RX_DATA messages, was
      always sending an RX_DATA_ACK with the goal of updating the credits.
      However, if the RDMA connection is moved out of FPDU mode abruptly,
      then it is possible for iw_cxgb4 to process queued RX_DATA CPLs after HW
      has aborted the connection.  These CPLs should not trigger RX_DATA_ACKs.
      If they do, HW can see a READ after DELETE of the DB_LE hash entry for
      the tid and post a LE_DB HashTblMemCrcError.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
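      A minimal sketch of the guard this fix implies; update_rx_credits()
      is the existing iw_cxgb4 helper that emits the CPL_RX_DATA_ACK, but
      the exact state test used by the real patch may differ:

          /* In rx_data(), before acking credits (illustrative): */
          if (ep->com.state != FPDU_MODE) {
                  /*
                   * The connection was closed or aborted: HW may already
                   * have deleted the DB_LE hash entry for this tid, so an
                   * RX_DATA_ACK now would be a read after delete and post
                   * a LE_DB HashTblMemCrcError.
                   */
                  return 0;
          }
          update_rx_credits(ep, dlen);    /* sends the CPL_RX_DATA_ACK */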
    • iw_cxgb4: free EQ queue memory on last deref · c12a67fe
      Committed by Steve Wise
      Commit ad61a4c7 ("iw_cxgb4: don't block in destroy_qp awaiting
      the last deref") introduced a bug where the RDMA QP EQ queue memory
      (and QIDs) could be freed before the underlying connection has been
      fully shut down.  The result is a possible DMA read issued by HW
      after the queue memory has been unmapped and freed, causing WR
      corruption in the worst case, system bus errors if an IOMMU is in
      use, and SGE "bad WR" errors at the very least.  The fix is to defer
      unmap/free of queue memory and QID resources until the QP struct has
      been fully dereferenced.  To do this, the c4iw_ucontext must also be
      kept around until the last QP that references it is fully freed.  In
      addition, since the last QP deref can happen in an IRQ-disabled
      context, we need a new workqueue thread to do the final unmap/free
      of the EQ queue memory.
      
      Fixes: ad61a4c7 ("iw_cxgb4: don't block in destroy_qp awaiting the last deref")
      Cc: stable@vger.kernel.org
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
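      The deferred free described above is the usual kref-plus-workqueue
      idiom; a sketch assuming the QP holds a kref and the device keeps a
      dedicated free workqueue (names follow iw_cxgb4 conventions but are
      illustrative):

          /* The release callback can run with IRQs disabled, so it only
           * queues work; the EQ memory and QIDs are unmapped and freed
           * later, in process context, by the work handler. */
          static void queue_qp_free(struct kref *kref)
          {
                  struct c4iw_qp *qhp = container_of(kref, struct c4iw_qp,
                                                     kref);

                  queue_work(qhp->rhp->rdev.free_workq, &qhp->free_work);
          }

          void c4iw_qp_rem_ref(struct ib_qp *qp)
          {
                  kref_put(&to_c4iw_qp(qp)->kref, queue_qp_free);
          }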
    • iw_cxgb4: refactor sq/rq drain logic · 4fe7c296
      Committed by Steve Wise
      With the addition of the IB/Core drain API, iw_cxgb4 supported drain
      by watching the CQs when the QP was out of RTS and signalling "drain
      complete" when the last CQE was polled.  This, however, doesn't fully
      implement the drain semantics: the drain logic is supposed to signal
      "drain complete" only when the application has _processed_ the last
      CQE, not merely removed it from the CQ.  Thus a small timing hole
      exists that can cause touch-after-free bugs in applications using
      the drain API (nvmf and iSER, for example).  So iw_cxgb4 needs a
      better solution.
      
      The iWARP Verbs spec mandates that "_at some point_ after the QP is
      moved to ERROR", the iWARP driver MUST synchronously fail post_send
      and post_recv calls.  Until now, iw_cxgb4 did not allow any posts
      once the QP was in ERROR.  This was in part because the HW queues
      for a QP in the ERROR state are disabled at that point, so there
      wasn't much else to do but fail the post operation synchronously.
      This restriction is what drove the first drain implementation in
      iw_cxgb4, which has the above-mentioned flaw.
      
      This patch changes iw_cxgb4 to allow post_send and post_recv WRs
      after the QP is moved to the ERROR state for kernel mode users, thus
      still adhering to the Verbs spec for user mode users while allowing
      flush WRs for kernel users.  Since the HW queues are disabled, we
      simply synthesize a CQE for the post, queue it to the SW CQ, and
      then call the CQ event handler.  This enables proper drain
      operations for the various storage applications.
      Signed-off-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
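      A sketch of the resulting post path; complete_sq_drain_wr() is an
      assumed helper name for the routine that builds the flushed software
      CQE and invokes the CQ's completion handler:

          int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
                             struct ib_send_wr **bad_wr)
          {
                  struct c4iw_qp *qhp = to_c4iw_qp(ibqp);

                  /* Kernel QP in ERROR: the HW queues are disabled, so
                   * instead of failing the post, synthesize a flushed CQE
                   * on the SW CQ and fire the completion handler so the
                   * drain logic sees the WR complete. */
                  if (qhp->attr.state == C4IW_QP_STATE_ERROR) {
                          complete_sq_drain_wr(qhp, wr);
                          return 0;
                  }

                  /* ... normal HW post path ... */
                  return 0;
          }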
  6. 30 December 2016 · 1 commit
    • net/mlx4_core: Fix raw qp flow steering rules under SRIOV · 10b1c04e
      Committed by Jack Morgenstein
      Demoting simple flow steering rule priority (for DPDK) was achieved by
      wrapping FW commands MLX4_QP_FLOW_STEERING_ATTACH/DETACH for the PF
      as well, and forcing the priority to MLX4_DOMAIN_NIC in the wrapper
      function for the PF and all VFs.
      
      In function mlx4_ib_create_flow(), this change caused the main rule
      creation for the PF to be wrapped, while leaving the associated
      tunnel-steering rule creation unwrapped for the PF.
      
      This mismatch caused rule deletion failures in mlx4_ib_destroy_flow()
      for the PF when the detach wrapper function did not find the associated
      tunnel-steering rule (since creation of that rule for the PF did not
      go through the wrapper function).
      
      Fix this by setting MLX4_QP_FLOW_STEERING_ATTACH/DETACH to be "native"
      (so that the PF invocation does not go through the wrapper), and perform
      the required priority demotion for the PF in the mlx4_ib_create_flow()
      code path.
      
      Fixes: 48564135 ("net/mlx4_core: Demote simple multicast and broadcast flow steering rules")
      Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
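      A simplified sketch of the two parts of the fix; the command call
      mirrors mlx4_cmd_imm() as used for flow steering attach, and the
      demotion condition shown is illustrative:

          /* 1) Issue the steering FW command natively so the PF bypasses
           *    the wrapper function: */
          ret = mlx4_cmd_imm(dev, mailbox->dma, &reg_id, size >> 2, 0,
                             MLX4_QP_FLOW_STEERING_ATTACH,
                             MLX4_CMD_TIME_CLASS_A,
                             MLX4_CMD_NATIVE);  /* was MLX4_CMD_WRAPPED */

          /* 2) Demote the rule priority for the PF directly in the
           *    mlx4_ib_create_flow() code path, as the wrapper used to: */
          if (mlx4_is_master(mdev->dev))
                  ctrl->prio = cpu_to_be16(MLX4_DOMAIN_NIC);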
  7. 23 December 2016 · 4 commits