1. 15 Dec, 2018 (2 commits)
  2. 11 Dec, 2018 (1 commit)
  3. 04 Dec, 2018 (2 commits)
  4. 30 Nov, 2018 (3 commits)
    • IB/mlx5: Handle raw delay drop general event · 09e574fa
      Committed by Saeed Mahameed
      Handle the FW general event "RQ delay drop" as received from FW via the
      mlx5 notifiers API, instead of handling the processed software version
      of that event. After this patch we can safely remove all
      software-processed FW event types and definitions.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • IB/mlx5: Handle raw port change event rather than the software version · 134e9349
      Committed by Saeed Mahameed
      Use the FW version of the port change event as forwarded via the new
      mlx5 notifiers API.
      
      After this patch, the software-processed version of the port change
      event becomes deprecated and will be removed entirely in downstream
      patches.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • IB/mlx5: Use the new mlx5 core notifier API · df097a27
      Committed by Saeed Mahameed
      Remove the deprecated mlx5_interface->event mlx5_ib callback and use
      the new mlx5 notifier API to subscribe to mlx5 events.
      
      For native mlx5_ib device profiles (pf_profile/nic_rep_profile),
      register the notifier callback mlx5_ib_handle_event, which treats the
      notifier context as an mlx5_ib_dev.
      
      For vport representors, don't register any notifier; as before, they
      do not receive any mlx5 events.
      
      For a slave port (mlx5_ib_multiport_info), register a different
      notifier callback, mlx5_ib_event_slave_port, which knows that the
      event arrives for an mlx5_ib_multiport_info and prepares the event job
      accordingly. Previously, the event handler work had to ask mlx5_core
      whether this is a slave port via mlx5_core_is_mp_slave(work->dev);
      that check is no longer needed.
      The mlx5_ib_multiport_info notifier is registered in
      mlx5_ib_bind_slave_port and unregistered in
      mlx5_ib_unbind_slave_port.
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
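      The subscription model described above can be sketched in plain,
      userspace C: each subscriber embeds a notifier block in its own state
      struct and recovers that state in the callback via container_of-style
      pointer math, mirroring how mlx5_ib_dev and mlx5_ib_multiport_info each
      register their own callback. Names such as dev_state and notify_all are
      illustrative only, not the actual mlx5 or kernel API:

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Stand-in for the raw FW event payload. */
      struct event { int type; };

      /* Minimal notifier block, the same shape kernel notifier chains use:
       * the subscriber embeds it and supplies the callback. */
      struct notifier_block {
          int (*notifier_call)(struct notifier_block *nb, struct event *ev);
          struct notifier_block *next;
      };

      static struct notifier_block *chain; /* head of the notifier chain */

      static void notifier_register(struct notifier_block *nb)
      {
          nb->next = chain;
          chain = nb;
      }

      static void notifier_unregister(struct notifier_block *nb)
      {
          for (struct notifier_block **p = &chain; *p; p = &(*p)->next) {
              if (*p == nb) {
                  *p = nb->next;
                  return;
              }
          }
      }

      static void notify_all(struct event *ev)
      {
          for (struct notifier_block *nb = chain; nb; nb = nb->next)
              nb->notifier_call(nb, ev);
      }

      /* Recover the embedding struct from a member pointer. */
      #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

      /* Subscriber state embedding its notifier block. */
      struct dev_state {
          struct notifier_block nb;
          int events_seen;
      };

      static int handle_event(struct notifier_block *nb, struct event *ev)
      {
          struct dev_state *dev = container_of(nb, struct dev_state, nb);
          (void)ev;
          dev->events_seen++;
          return 0;
      }

      int main(void)
      {
          struct dev_state pf    = { .nb = { .notifier_call = handle_event } };
          struct dev_state slave = { .nb = { .notifier_call = handle_event } };

          notifier_register(&pf.nb);
          notifier_register(&slave.nb);   /* cf. mlx5_ib_bind_slave_port */

          struct event ev = { .type = 1 };
          notify_all(&ev);                /* both subscribers see the event */

          notifier_unregister(&slave.nb); /* cf. mlx5_ib_unbind_slave_port */
          notify_all(&ev);                /* only pf sees this one */

          assert(pf.events_seen == 2);
          assert(slave.events_seen == 1);
          return 0;
      }
      ```

      Because each callback recovers its own embedding struct, the chain can
      deliver one raw event to heterogeneous subscribers without the
      dispatcher asking "which kind of device is this?", which is exactly the
      mlx5_core_is_mp_slave() check the patch removes.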
  5. 22 Nov, 2018 (1 commit)
  6. 21 Nov, 2018 (2 commits)
  7. 18 Oct, 2018 (2 commits)
  8. 17 Oct, 2018 (1 commit)
  9. 11 Oct, 2018 (1 commit)
  10. 28 Sep, 2018 (2 commits)
  11. 27 Sep, 2018 (1 commit)
  12. 26 Sep, 2018 (5 commits)
  13. 22 Sep, 2018 (2 commits)
  14. 21 Sep, 2018 (2 commits)
  15. 11 Sep, 2018 (11 commits)
  16. 06 Sep, 2018 (1 commit)
  17. 05 Sep, 2018 (1 commit)
    • IB/mlx5: Change TX affinity assignment in RoCE LAG mode · c6a21c38
      Committed by Majd Dibbiny
      In the current code, the TX affinity is per RoCE device, which can
      cause unfairness between different contexts. E.g., if we open two
      contexts and each opens 10 QPs concurrently, all of the QPs of the
      first context might end up on the first port instead of being
      distributed across the two ports as expected.
      
      To overcome this unfairness between processes, we maintain both a
      per-device TX affinity and a per-ucontext TX affinity.
      
      The allocation algorithm is as follows:
      
      1. Hold two tx_port_affinity atomic variables, one per RoCE device and one
         per ucontext. Both initialized to 0.
      
      2. In mlx5_ib_alloc_ucontext do:
       2.1. ucontext.tx_port_affinity = device.tx_port_affinity
       2.2. device.tx_port_affinity += 1
      
      3. In modify QP INIT2RST:
       3.1. qp.tx_port_affinity = ucontext.tx_port_affinity % MLX5_PORT_NUM
       3.2. ucontext.tx_port_affinity += 1
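      The three steps above can be sketched as a minimal userspace C11
      program using atomics. MLX5_PORT_NUM is fixed at 2 here (a two-port
      RoCE LAG), and the names mirror the algorithm description; this is an
      illustrative sketch, not the kernel implementation:

      ```c
      #include <assert.h>
      #include <stdatomic.h>

      #define MLX5_PORT_NUM 2  /* two-port RoCE LAG bond */

      /* Step 1: one atomic counter per device, one per ucontext,
       * both starting at 0. Static atomics are zero-initialized. */
      static atomic_uint device_tx_port_affinity;

      struct ucontext {
          atomic_uint tx_port_affinity;
      };

      /* Step 2: a new ucontext seeds its counter from the device counter,
       * then the device counter advances, staggering contexts. */
      static void alloc_ucontext(struct ucontext *uc)
      {
          unsigned int seed = atomic_fetch_add(&device_tx_port_affinity, 1);
          atomic_init(&uc->tx_port_affinity, seed);
      }

      /* Step 3: on QP modify, assign the port from the ucontext counter
       * modulo the port count, then advance the counter. */
      static unsigned int qp_assign_tx_port(struct ucontext *uc)
      {
          unsigned int a = atomic_fetch_add(&uc->tx_port_affinity, 1);
          return a % MLX5_PORT_NUM;  /* qp.tx_port_affinity */
      }

      int main(void)
      {
          struct ucontext a, b;
          alloc_ucontext(&a);  /* seeds 0, device counter becomes 1 */
          alloc_ucontext(&b);  /* seeds 1, device counter becomes 2 */

          int port_count[MLX5_PORT_NUM] = { 0 };
          for (int i = 0; i < 4; i++) {
              port_count[qp_assign_tx_port(&a)]++;
              port_count[qp_assign_tx_port(&b)]++;
          }

          /* 8 QPs across two contexts split evenly over the two ports,
           * and each context alternates ports on its own. */
          assert(port_count[0] == 4 && port_count[1] == 4);
          return 0;
      }
      ```

      Seeding each ucontext from the shared device counter is what restores
      fairness: two contexts start on different ports, so even if one context
      creates all its QPs back-to-back, its QPs alternate ports and the
      aggregate load stays balanced.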
      Signed-off-by: Majd Dibbiny <majd@mellanox.com>
      Reviewed-by: Moni Shoua <monis@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>