1. 21 Sep, 2018: 1 commit
  2. 11 Sep, 2018: 11 commits
  3. 06 Sep, 2018: 1 commit
  4. 05 Sep, 2018: 1 commit
    • IB/mlx5: Change TX affinity assignment in RoCE LAG mode · c6a21c38
      Committed by Majd Dibbiny
      In the current code, the TX affinity is per RoCE device, which can cause
      unfairness between different contexts. e.g., if we open two contexts and
      each opens 10 QPs concurrently, all of the QPs of the first context might
      end up on the first port instead of being distributed across the two ports
      as expected.
      
      To overcome this unfairness between processes, we maintain a per-device TX
      affinity and a per-process TX affinity.
      
      The allocation algorithm is as follows:
      
      1. Hold two tx_port_affinity atomic variables, one per RoCE device and one
         per ucontext. Both initialized to 0.
      
      2. In mlx5_ib_alloc_ucontext do:
       2.1. ucontext.tx_port_affinity = device.tx_port_affinity
       2.2. device.tx_port_affinity += 1
      
      3. In modify QP INIT2RST:
       3.1. qp.tx_port_affinity = ucontext.tx_port_affinity % MLX5_PORT_NUM
       3.2. ucontext.tx_port_affinity += 1
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Reviewed-by: Moni Shoua <monis@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      c6a21c38
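
      The two-counter round-robin above is easy to model outside the driver. Below is a
      minimal userspace C sketch of that allocation scheme, assuming two ports; the struct
      and function names (roce_device, ucontext, alloc_ucontext, qp_assign_port) are
      illustrative only and are not the actual mlx5 symbols.

      #include <stdatomic.h>
      #include <stdio.h>

      #define MLX5_PORT_NUM 2                 /* two ports in the RoCE LAG bond */

      struct roce_device { atomic_uint tx_port_affinity; };  /* per-device counter */
      struct ucontext    { atomic_uint tx_port_affinity; };  /* per-process counter */

      /* step 2: seed the new context from the device counter, then bump the device counter */
      static void alloc_ucontext(struct roce_device *dev, struct ucontext *ctx)
      {
          atomic_init(&ctx->tx_port_affinity,
                      atomic_fetch_add(&dev->tx_port_affinity, 1));
      }

      /* step 3: each new QP of a context round-robins over the ports */
      static unsigned int qp_assign_port(struct ucontext *ctx)
      {
          return atomic_fetch_add(&ctx->tx_port_affinity, 1) % MLX5_PORT_NUM;
      }

      int main(void)
      {
          struct roce_device dev;
          struct ucontext a, b;

          atomic_init(&dev.tx_port_affinity, 0);
          alloc_ucontext(&dev, &a);           /* context A starts from port 0 */
          alloc_ucontext(&dev, &b);           /* context B starts from port 1 */

          for (int i = 0; i < 4; i++)
              printf("ctx A qp%d -> port %u, ctx B qp%d -> port %u\n",
                     i, qp_assign_port(&a), i, qp_assign_port(&b));
          return 0;
      }

      Because each context keeps its own counter seeded from the shared device counter,
      QPs opened concurrently by two contexts still spread across both ports instead of
      all landing on the first one.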
  5. 11 Aug, 2018: 1 commit
  6. 03 Aug, 2018: 1 commit
    • RDMA/netdev: Use priv_destructor for netdev cleanup · 9f49a5b5
      Committed by Jason Gunthorpe
      Now that the unregister_netdev flow for IPoIB no longer relies on external
      code, we can introduce the use of priv_destructor and needs_free_netdev.
      
      The rdma_netdev flow is switched to use the common netdev priv_destructor
      instead of the special free_rdma_netdev, and the IPoIB ULP is adjusted accordingly:
       - priv_destructor needs to switch to point to the ULP's destructor,
         which will then call the rdma_ndev's destructor in the right order
       - We need to be careful around the error unwind of register_netdev,
         as it sometimes calls priv_destructor on failure
       - ULPs need to use ndo_init/uninit to ensure proper ordering
         of failures around register_netdev
      
      Switching to priv_destructor is a necessary prerequisite for using
      the rtnl new_link mechanism.
      
      The VNIC user for rdma_netdev should also be revised, but that is left for
      another patch.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      Signed-off-by: Denis Drozdov <denisd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      9f49a5b5
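
      The ordering constraint in the list above (the ULP's destructor runs first and then
      calls the rdma_ndev's) can be illustrated with a small destructor-chaining sketch.
      This is plain userspace C under assumed names (fake_netdev, rdma_destructor,
      ulp_destructor); it is not the kernel's net_device API, only a model of the hand-off.

      #include <stdio.h>
      #include <stdlib.h>

      struct fake_netdev {
          void (*priv_destructor)(struct fake_netdev *);  /* single cleanup hook */
          void *rdma_priv;
          void *ulp_priv;
      };

      /* installed by the rdma_netdev layer when the device is created */
      static void rdma_destructor(struct fake_netdev *dev)
      {
          printf("rdma layer cleanup\n");
          free(dev->rdma_priv);
      }

      /* the ULP remembers the lower layer's destructor so it can run it last */
      static void (*saved_destructor)(struct fake_netdev *);

      static void ulp_destructor(struct fake_netdev *dev)
      {
          printf("ULP cleanup first\n");
          free(dev->ulp_priv);
          if (saved_destructor)
              saved_destructor(dev);        /* then the rdma layer, in that order */
      }

      int main(void)
      {
          struct fake_netdev dev = {
              .priv_destructor = rdma_destructor,
              .rdma_priv = malloc(16),
              .ulp_priv = NULL,
          };

          /* ULP attach: re-point the destructor, keeping the old one for chaining */
          saved_destructor = dev.priv_destructor;
          dev.ulp_priv = malloc(16);
          dev.priv_destructor = ulp_destructor;

          /* teardown (or a register failure unwind) only has to call one hook */
          dev.priv_destructor(&dev);
          return 0;
      }

      In the kernel flow, the error unwind of register_netdev can invoke this same single
      hook, which is why the commit also moves ULP setup into ndo_init/uninit so failures
      unwind in a well-defined order.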
  7. 31 Jul, 2018: 1 commit
    • IB/uverbs: Add UVERBS_ATTR_FLAGS_IN to the specs language · bccd0622
      Committed by Jason Gunthorpe
      This clearly indicates that the input is a bitwise combination of values
      in an enum, and identifies which enum contains the definition of the bits.
      
      Special accessors are provided that handle the mandatory validation of the
      allowed bits and enforce the correct type for bitwise flags.
      
      If we had introduced this at the start, the kabi would have uniformly
      used u64 data to pass flags; however, today there is a mixture of u64 and
      u32 flags. All places are converted to accept both sizes and the accessor
      fixes it up. This allows all existing flags to grow to u64 in the future
      without any hassle.
      
      Finally, all flags are, by definition, optional. If flags are not passed,
      the accessor does not fail but provides a value of zero.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      bccd0622
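
      A rough userspace model of the accessor behaviour described above (validate against
      the allowed bits, accept both 32- and 64-bit inputs, default to zero when the
      optional attribute is absent) might look like the sketch below. The names flags_attr
      and get_flags are made up for illustration and are not the uverbs helpers.

      #include <stdint.h>
      #include <stdbool.h>
      #include <stdio.h>
      #include <errno.h>

      struct flags_attr {
          bool     present;   /* was the attribute supplied at all? */
          uint8_t  len;       /* 4 or 8 bytes, as given by userspace */
          uint64_t data;
      };

      static int get_flags(const struct flags_attr *attr, uint64_t allowed,
                           uint64_t *out)
      {
          uint64_t val;

          if (!attr->present) {               /* flags are optional: default to 0 */
              *out = 0;
              return 0;
          }
          if (attr->len != sizeof(uint32_t) && attr->len != sizeof(uint64_t))
              return -EINVAL;                 /* only u32 or u64 payloads accepted */
          val = attr->data;
          if (attr->len == sizeof(uint32_t) && val > UINT32_MAX)
              return -EINVAL;
          if (val & ~allowed)                 /* mandatory validation of known bits */
              return -EINVAL;
          *out = val;
          return 0;
      }

      int main(void)
      {
          struct flags_attr in = { .present = true, .len = 4, .data = 0x3 };
          uint64_t flags;

          if (!get_flags(&in, 0x7, &flags))
              printf("flags = 0x%llx\n", (unsigned long long)flags);
          return 0;
      }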
  8. 27 Jul, 2018: 1 commit
  9. 25 Jul, 2018: 4 commits
  10. 24 Jul, 2018: 1 commit
  11. 11 Jul, 2018: 2 commits
  12. 10 Jul, 2018: 2 commits
  13. 05 Jul, 2018: 4 commits
  14. 04 Jul, 2018: 2 commits
  15. 30 Jun, 2018: 1 commit
  16. 27 Jun, 2018: 1 commit
  17. 26 Jun, 2018: 1 commit
    • IB/mlx5: Add support for drain SQ & RQ · d0e84c0a
      Committed by Yishai Hadas
      This patch follows the logic from ib_core but considers the internal
      device state upon executing the involved commands.
      
      Specifically, when the device is in an internal error state, modifying a
      QP to the error state can be assumed to succeed, since each in-progress WR
      is going to be flushed with an error in any case, which is exactly what
      that modify command expects.
      
      In addition, since the drain should never fail, the driver makes sure that
      post_send/post_recv succeed even if the device is already in an internal
      error state. That way, once the driver supplies the simulated/SW CQEs, the
      CQE for the drain WR is handled as well.
      
      In the internal error state, the CQE for the drain WR may be completed
      either as part of the main task that handles the error state or by the
      task that issued the drain WR.
      
      Since the above depends on scheduling, the code takes the relevant locks
      and actions to make sure that the completion handler for that WR is always
      called after post_send/post_recv have been issued, and never in parallel
      with the other task that handles the error flow.
      Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      d0e84c0a
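
      The scheduling concern in the last paragraph, that the drain completion must fire
      after post_send/post_recv return and never race the error-flow task, is essentially
      a wait-for-completion pattern. Below is a minimal userspace C sketch of that pattern
      using pthreads; drain_ctx, drain_complete, and flush_thread are illustrative names,
      not the driver's.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct drain_ctx {
          pthread_mutex_t lock;
          pthread_cond_t  done_cv;
          bool            done;
      };

      /* completion handler for the drain WR: runs after all earlier WRs were flushed */
      static void drain_complete(struct drain_ctx *ctx)
      {
          pthread_mutex_lock(&ctx->lock);
          ctx->done = true;
          pthread_cond_signal(&ctx->done_cv);
          pthread_mutex_unlock(&ctx->lock);
      }

      /* stand-in for "the task that handles the error state" flushing the queues */
      static void *flush_thread(void *arg)
      {
          /* ... flush in-progress WRs with error status, then the drain WR ... */
          drain_complete(arg);
          return NULL;
      }

      int main(void)
      {
          struct drain_ctx ctx = {
              .lock    = PTHREAD_MUTEX_INITIALIZER,
              .done_cv = PTHREAD_COND_INITIALIZER,
              .done    = false,
          };
          pthread_t t;

          /* "post" the drain WR, then block until its completion is delivered */
          pthread_create(&t, NULL, flush_thread, &ctx);

          pthread_mutex_lock(&ctx.lock);
          while (!ctx.done)
              pthread_cond_wait(&ctx.done_cv, &ctx.lock);
          pthread_mutex_unlock(&ctx.lock);

          pthread_join(t, NULL);
          printf("SQ/RQ drained\n");
          return 0;
      }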
  18. 22 Jun, 2018: 1 commit
  19. 20 Jun, 2018: 3 commits