1. 04 Dec 2018, 1 commit
  2. 30 Nov 2018, 3 commits
    • IB/mlx5: Handle raw delay drop general event · 09e574fa
      Saeed Mahameed committed
      Handle the FW general event "rq delay drop" as it is received from FW via the
      mlx5 notifiers API, instead of handling the processed software version of that
      event. After this patch we can safely remove all software-processed FW event
      types and definitions.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
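      A minimal sketch of inspecting the raw event inside a notifier callback, assuming
      the struct mlx5_eqe layout and the MLX5_EVENT_TYPE_GENERAL_EVENT /
      MLX5_GENERAL_SUBTYPE_DELAY_DROP_TIMEOUT constants from linux/mlx5/device.h; the
      handler name and the surrounding driver glue are illustrative only:

        #include <linux/kernel.h>
        #include <linux/notifier.h>
        #include <linux/mlx5/device.h>

        /* Runs in the mlx5 notifier chain; 'data' is the raw FW EQE payload. */
        static int demo_general_event(struct notifier_block *nb,
                                      unsigned long event, void *data)
        {
                struct mlx5_eqe *eqe = data;    /* raw EQE, no SW translation */

                if (event != MLX5_EVENT_TYPE_GENERAL_EVENT)
                        return NOTIFY_DONE;

                if (eqe->sub_type == MLX5_GENERAL_SUBTYPE_DELAY_DROP_TIMEOUT) {
                        /* queue the RQ delay-drop timeout handling here */
                }
                return NOTIFY_OK;
        }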
    • IB/mlx5: Handle raw port change event rather than the software version · 134e9349
      Saeed Mahameed committed
      Use the FW version of the port change event as forwarded via the new mlx5
      notifiers API.
      
      After this patch, the processed software version of the port change event
      becomes deprecated and will be removed entirely in downstream patches.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
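      A sketch of decoding the raw port-change EQE in a notifier callback, assuming
      struct mlx5_eqe and the MLX5_PORT_CHANGE_SUBTYPE_* constants from
      linux/mlx5/device.h; the handler itself and the logging are illustrative:

        #include <linux/kernel.h>
        #include <linux/notifier.h>
        #include <linux/mlx5/device.h>

        static int demo_port_change(struct notifier_block *nb,
                                    unsigned long event, void *data)
        {
                struct mlx5_eqe *eqe = data;            /* raw FW EQE */
                u8 port;

                if (event != MLX5_EVENT_TYPE_PORT_CHANGE)
                        return NOTIFY_DONE;

                /* The port number is carried in the upper nibble of data.port.port. */
                port = (eqe->data.port.port >> 4) & 0xf;

                switch (eqe->sub_type) {
                case MLX5_PORT_CHANGE_SUBTYPE_ACTIVE:
                case MLX5_PORT_CHANGE_SUBTYPE_DOWN:
                        /* map to IB_EVENT_PORT_ACTIVE/IB_EVENT_PORT_ERR for 'port' */
                        pr_debug("demo: port %u changed, subtype %u\n",
                                 port, eqe->sub_type);
                        break;
                default:
                        break;
                }
                return NOTIFY_OK;
        }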
    • IB/mlx5: Use the new mlx5 core notifier API · df097a27
      Saeed Mahameed committed
      Remove the deprecated mlx5_interface->event mlx5_ib callback and use the new
      mlx5 notifier API to subscribe to mlx5 events.
      
      For the native mlx5_ib device profiles (pf_profile/nic_rep_profile), register
      the notifier callback mlx5_ib_handle_event, which treats the notifier context
      as an mlx5_ib_dev.
      
      For vport representors, don't register any notifier; as before, they do not
      receive any mlx5 events.
      
      For slave ports (mlx5_ib_multiport_info), register a different notifier
      callback, mlx5_ib_event_slave_port, which knows that the event arrives for an
      mlx5_ib_multiport_info and prepares the event work accordingly.
      Previously, the event handler work had to ask mlx5_core whether this is a
      slave port via mlx5_core_is_mp_slave(work->dev); that is no longer needed.
      mlx5_ib_multiport_info notifier registration is done in
      mlx5_ib_bind_slave_port and de-registration in mlx5_ib_unbind_slave_port.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
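      A minimal sketch of the subscription pattern this commit introduces, assuming
      the mlx5_notifier_register()/mlx5_notifier_unregister() API referenced by the
      series; the embedding struct and callback names are illustrative stand-ins for
      mlx5_ib_dev / mlx5_ib_handle_event:

        #include <linux/kernel.h>
        #include <linux/notifier.h>
        #include <linux/mlx5/driver.h>

        struct demo_ib_dev {
                struct mlx5_core_dev *mdev;
                struct notifier_block mdev_events;      /* embedded notifier block */
        };

        static int demo_handle_event(struct notifier_block *nb,
                                     unsigned long event, void *data)
        {
                /* Recover the owning device from the embedded notifier_block. */
                struct demo_ib_dev *dev =
                        container_of(nb, struct demo_ib_dev, mdev_events);

                /* 'event' is the raw FW event type, 'data' the raw EQE payload. */
                pr_debug("demo: mlx5 event 0x%lx on dev %p\n", event, dev->mdev);
                return NOTIFY_OK;
        }

        static void demo_subscribe(struct demo_ib_dev *dev)
        {
                dev->mdev_events.notifier_call = demo_handle_event;
                mlx5_notifier_register(dev->mdev, &dev->mdev_events);
        }

        static void demo_unsubscribe(struct demo_ib_dev *dev)
        {
                mlx5_notifier_unregister(dev->mdev, &dev->mdev_events);
        }

      A slave port would register the same way, but with a callback that expects the
      containing structure to be mlx5_ib_multiport_info, which is exactly the
      distinction the commit describes.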
  3. 21 Nov 2018, 2 commits
  4. 18 Oct 2018, 2 commits
  5. 17 Oct 2018, 1 commit
  6. 11 Oct 2018, 1 commit
  7. 28 Sep 2018, 2 commits
  8. 27 Sep 2018, 1 commit
  9. 26 Sep 2018, 5 commits
  10. 22 Sep 2018, 2 commits
  11. 21 Sep 2018, 2 commits
  12. 11 Sep 2018, 11 commits
  13. 06 Sep 2018, 1 commit
  14. 05 Sep 2018, 1 commit
    • IB/mlx5: Change TX affinity assignment in RoCE LAG mode · c6a21c38
      Majd Dibbiny committed
      In the current code, the TX affinity is per RoCE device, which can cause
      unfairness between different contexts. E.g. if we open two contexts and each
      opens 10 QPs concurrently, all of the QPs of the first context might end up on
      the first port instead of being distributed across the two ports as expected.
      
      To overcome this unfairness between processes, we maintain a per-device TX
      affinity and a per-process TX affinity.
      
      The allocation algorithm is as follows (a sketch appears after this commit entry):
      
      1. Hold two tx_port_affinity atomic variables, one per RoCE device and one
         per ucontext. Both initialized to 0.
      
      2. In mlx5_ib_alloc_ucontext do:
       2.1. ucontext.tx_port_affinity = device.tx_port_affinity
       2.2. device.tx_port_affinity += 1
      
      3. In modify QP INIT2RST:
       3.1. qp.tx_port_affinity = ucontext.tx_port_affinity % MLX5_PORT_NUM
       3.2. ucontext.tx_port_affinity += 1
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Reviewed-by: Moni Shoua <monis@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
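      A standalone C11 sketch of the round-robin described above; the names are
      illustrative, DEMO_PORT_NUM stands in for the MLX5_PORT_NUM used in the commit
      message, and C11 stdatomic stands in for the kernel's atomic_t:

        #include <stdatomic.h>
        #include <stdio.h>

        #define DEMO_PORT_NUM 2                 /* stands in for MLX5_PORT_NUM */

        struct demo_device   { atomic_uint tx_port_affinity; };
        struct demo_ucontext { atomic_uint tx_port_affinity; };

        /* Step 2: seed the ucontext counter from the device counter, then bump it. */
        static void demo_alloc_ucontext(struct demo_device *dev,
                                        struct demo_ucontext *uctx)
        {
                unsigned int seed = atomic_fetch_add(&dev->tx_port_affinity, 1);

                atomic_init(&uctx->tx_port_affinity, seed);
        }

        /* Step 3: per-QP assignment, round-robin within the ucontext. */
        static unsigned int demo_qp_tx_affinity(struct demo_ucontext *uctx)
        {
                return atomic_fetch_add(&uctx->tx_port_affinity, 1) % DEMO_PORT_NUM;
        }

        int main(void)
        {
                struct demo_device dev = { .tx_port_affinity = 0 };
                struct demo_ucontext a, b;
                int i;

                demo_alloc_ucontext(&dev, &a);  /* context a starts at 0 */
                demo_alloc_ucontext(&dev, &b);  /* context b starts at 1 */

                for (i = 0; i < 4; i++)
                        printf("ctx a -> port %u, ctx b -> port %u\n",
                               demo_qp_tx_affinity(&a), demo_qp_tx_affinity(&b));
                return 0;
        }

      Because each context is seeded with a different starting value, concurrent
      contexts alternate between the two ports instead of piling onto the first one.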
  15. 11 Aug 2018, 1 commit
  16. 03 Aug 2018, 1 commit
    • RDMA/netdev: Use priv_destructor for netdev cleanup · 9f49a5b5
      Jason Gunthorpe committed
      Now that the unregister_netdev flow for IPoIB no longer relies on external
      code, we can introduce the use of priv_destructor and needs_free_netdev.
      
      The rdma_netdev flow is switched to use the common netdev priv_destructor
      instead of the special free_rdma_netdev, and the IPoIB ULP is adjusted
      accordingly (a sketch of the destructor chaining follows this commit entry):
       - priv_destructor needs to switch to point to the ULP's destructor,
         which will then call the rdma_ndev's in the right order
       - We need to be careful around the error unwind of register_netdev,
         as it sometimes calls priv_destructor on failure
       - ULPs need to use ndo_init/uninit to ensure proper ordering
         of failures around register_netdev
      
      Switching to priv_destructor is a necessary prerequisite to using the rtnl
      new_link mechanism.
      
      The VNIC user for rdma_netdev should also be revised, but that is left for
      another patch.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      Signed-off-by: Denis Drozdov <denisd@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
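      A hedged sketch of the destructor chaining described above, using the netdev
      priv_destructor / needs_free_netdev fields; the ULP structure and function
      names are illustrative, not IPoIB's actual ones:

        #include <linux/netdevice.h>

        struct demo_ulp_priv {
                /* Destructor installed by the rdma_netdev provider, saved so the
                 * ULP can call it after its own teardown. */
                void (*rn_destructor)(struct net_device *dev);
        };

        static void demo_ulp_priv_destructor(struct net_device *dev)
        {
                struct demo_ulp_priv *priv = netdev_priv(dev);

                /* ULP cleanup would go here; the rdma_netdev's destructor runs
                 * last, matching the ordering the commit describes. */
                if (priv->rn_destructor)
                        priv->rn_destructor(dev);
        }

        static void demo_ulp_take_over_destructor(struct net_device *dev)
        {
                struct demo_ulp_priv *priv = netdev_priv(dev);

                /* Chain: remember the provider's priv_destructor, point the netdev
                 * at the ULP's, and set needs_free_netdev so the core frees the
                 * netdev after the destructor has run. */
                priv->rn_destructor = dev->priv_destructor;
                dev->priv_destructor = demo_ulp_priv_destructor;
                dev->needs_free_netdev = true;
        }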
  17. 31 Jul 2018, 1 commit
    • IB/uverbs: Add UVERBS_ATTR_FLAGS_IN to the specs language · bccd0622
      Jason Gunthorpe committed
      This clearly indicates that the input is a bitwise combination of values
      in an enum, and identifies which enum contains the definition of the bits.
      
      Special accessors are provided that handle the mandatory validation of the
      allowed bits and enforce the correct type for bitwise flags.
      
      If we had introduced this at the start, the kABI would have uniformly used u64
      data to pass flags; however, today there is a mixture of u64 and u32 flags.
      All places are converted to accept both sizes and the accessor fixes it up.
      This allows all existing flags to grow to u64 in the future without any
      hassle.
      
      Finally, all flags are, by definition, optional. If flags are not passed, the
      accessor does not fail but provides a value of zero.
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
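      A hedged sketch of what using the new accessor looks like from a method
      handler, assuming uverbs_get_flags64() as added by this patch; the attribute
      id, the flag enum, and the handler body are placeholders:

        #include <rdma/uverbs_ioctl.h>

        /* Placeholder attribute id and flag bits for illustration only. */
        enum { DEMO_METHOD_ATTR_FLAGS = (1U << 8) };
        enum demo_flags {
                DEMO_FLAG_A = 1 << 0,
                DEMO_FLAG_B = 1 << 1,
        };

        static int demo_method_handler(struct uverbs_attr_bundle *attrs)
        {
                u64 flags;
                int ret;

                /* Rejects any bit outside DEMO_FLAG_A|DEMO_FLAG_B, accepts either a
                 * u32 or u64 attribute, and yields 0 if the flags attr was omitted. */
                ret = uverbs_get_flags64(&flags, attrs, DEMO_METHOD_ATTR_FLAGS,
                                         DEMO_FLAG_A | DEMO_FLAG_B);
                if (ret)
                        return ret;

                if (flags & DEMO_FLAG_A) {
                        /* behaviour selected by DEMO_FLAG_A */
                }
                return 0;
        }

      On the spec side, the attribute itself would be declared with the
      UVERBS_ATTR_FLAGS_IN() macro from the commit title, naming enum demo_flags as
      the enum that defines the valid bits.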
  18. 27 Jul 2018, 1 commit
  19. 25 Jul 2018, 1 commit