1. 26 Jan 2020, 1 commit
    • IB/mlx5: Return the administrative GUID if exists · 4bbd4923
      Authored by Danit Goldberg
      A user can change the operational GUID (a.k.a. the effective GUID)
      through link/infiniband. It is therefore preferable to return the
      currently set administrative GUID, if one exists, instead of the
      operational GUID.
      
      This way the PF can query which VF GUID will be set in the next bind.
      To align with MAC address behavior, zero is returned if the
      administrative GUID is not set.
      
      For example, before setting the administrative GUID:
       $ ip link show
       ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP mode DEFAULT group default qlen 256
       link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
       vf 0     link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff,
       spoof checking off, NODE_GUID 00:00:00:00:00:00:00:00, PORT_GUID 00:00:00:00:00:00:00:00, link-state auto, trust off, query_rss off
      
      Then:
      
       $ ip link set ib0 vf 0 node_guid 11:00:af:21:cb:05:11:00
       $ ip link set ib0 vf 0 port_guid 22:11:af:21:cb:05:11:00
      
      After setting the administrative GUID:
       $ ip link show
       ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP mode DEFAULT group default qlen 256
       link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
       vf 0     link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff,
       spoof checking off, NODE_GUID 11:00:af:21:cb:05:11:00, PORT_GUID 22:11:af:21:cb:05:11:00, link-state auto, trust off, query_rss off
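
      The reporting logic boils down to the following hedged sketch;
      report_vf_guid() and struct vf_state are illustrative names invented
      here, and only struct ifla_vf_guid is the real uAPI type:

       /* Hedged sketch, not the exact mlx5 code: prefer the administrative
        * GUID set via "ip link set ... vf N node_guid/port_guid" and report
        * zero when it was never set, matching administrative MAC behavior. */
       static int report_vf_guid(struct vf_state *vf,
                                 struct ifla_vf_guid *node_guid,
                                 struct ifla_vf_guid *port_guid)
       {
               node_guid->guid = vf->admin_node_guid; /* zero if unset */
               port_guid->guid = vf->admin_port_guid; /* zero if unset */
               return 0;
       }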
      
      Fixes: 9c0015ef ("IB/mlx5: Implement callbacks for getting VFs GUID attributes")
      Link: https://lore.kernel.org/r/20200116120048.12744-1-leon@kernel.org
      Signed-off-by: Danit Goldberg <danitg@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  2. 12 Nov 2019, 1 commit
  3. 02 Nov 2019, 1 commit
  4. 29 Oct 2019, 1 commit
  5. 02 Sep 2019, 1 commit
  6. 22 Aug 2019, 1 commit
    • net/mlx5: Add HV VHCA infrastructure · 87175120
      Authored by Eran Ben Elisha
      HV VHCA is a layer that provides a PF-to-VF communication channel based
      on the Hyper-V PCI config channel. It implements Mellanox's inter-VHCA
      control communication protocol. The protocol contains a control block
      used to pass messages between the PF and VF drivers, and data blocks
      used to pass the actual data.
      
      The infrastructure is agent based. Each agent is responsible for
      contiguous buffer blocks in the VHCA config space. The infrastructure
      binds agents to their blocks, and each agent may only read/write the
      buffer blocks assigned to it. Each agent provides three callbacks
      (control, invalidate, cleanup). Control is invoked when block-0 is
      invalidated with a command that concerns this agent. Invalidate is
      invoked when one of the blocks assigned to this agent is invalidated.
      Cleanup is invoked before the agent is freed, so it can release all of
      its open resources and cancel any deferred work.
      
      Block-0 serves as the control block. All execution commands from the PF
      are written over this block, and the VF acks them by writing to block-0
      as well. Its format is described by the struct
      mlx5_hv_vhca_control_block layout.
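
      A minimal sketch of the agent contract implied above; the ops struct,
      names, and signatures here are assumptions based on this description,
      not the exact mlx5 API:

       /* Illustrative agent callbacks; names and signatures are assumptions
        * derived from the commit message, not the driver's exact API. */
       struct hv_vhca_agent_ops {
               /* Block-0 was invalidated with a command for this agent. */
               void (*control)(struct mlx5_hv_vhca_agent *agent,
                               struct mlx5_hv_vhca_control_block *block);
               /* One of the agent's assigned buffer blocks was invalidated;
                * block_mask says which ones. */
               void (*invalidate)(struct mlx5_hv_vhca_agent *agent,
                                  u64 block_mask);
               /* Called before the agent is freed: release resources and
                * cancel deferred work. */
               void (*cleanup)(struct mlx5_hv_vhca_agent *agent);
       };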
      Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 11 Aug 2019, 1 commit
  8. 08 Aug 2019, 1 commit
  9. 02 Aug 2019, 2 commits
    • net/mlx5: Add flow counter pool · 558101f1
      Authored by Gavi Teitz
      Add a pool of flow counters, based on flow counter bulks, removing the
      need to allocate a new counter via a costly FW command during flow
      creation. This cuts the time to acquire/release a flow counter from
      ~50 [us] to ~50 [ns].
      
      The pool is part of the mlx5 driver instance and provides flow
      counters for aging flows. mlx5_fc_create() was modified to provide
      counters for aging flows from the pool by default, and
      mlx5_fc_destroy() was modified to release counters back to the pool
      for later reuse. If bulk allocation is not supported or fails, and for
      non-aging flows, the fallback behavior is to allocate and free
      individual counters.
      
      The pool consists of three lists of flow counter bulks: fully used
      bulks, partially used bulks, and unused bulks. Counters are provided
      from the partially used bulks first, to help limit bulk fragmentation.
      
      The pool maintains a threshold and strives to keep the number of
      available counters below it. The pool grows when a counter acquisition
      request finds no available counters, and it shrinks when the last
      counter in a bulk is released while more counters than the threshold
      are available. All pool size changes are done in the context of the
      acquiring/releasing process.
      
      The value of the threshold is directly correlated with the number of
      used counters the pool is providing, constrained by a hard maximum,
      and is recalculated every time a bulk is allocated or freed. This
      ensures that the pool only consumes large amounts of memory for
      available counters when it is being used heavily. When fully populated
      and at the hard maximum, the buffer of available counters consumes
      ~40 [MB].
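
      The acquisition order described above can be sketched as follows;
      fc_pool, fc_bulk, and the helper functions are illustrative names
      invented here (only list_first_entry_or_null() is the real kernel
      helper), not the driver's exact code:

       /* Hedged sketch of counter acquisition from the pool. */
       static struct mlx5_fc *pool_acquire_counter(struct fc_pool *pool)
       {
               struct fc_bulk *bulk;

               /* Prefer partially used bulks to limit bulk fragmentation. */
               bulk = list_first_entry_or_null(&pool->partially_used_list,
                                               struct fc_bulk, list);
               if (!bulk)
                       bulk = list_first_entry_or_null(&pool->unused_list,
                                                       struct fc_bulk, list);
               if (!bulk)
                       bulk = pool_alloc_bulk(pool); /* grow: none available */
               if (!bulk)
                       return NULL;

               /* May move the bulk to the fully-used list when it empties. */
               return fc_bulk_acquire_counter(pool, bulk);
       }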
      Signed-off-by: Gavi Teitz <gavi@mellanox.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • net/mlx5: Refactor and optimize flow counter bulk query · 6f06e04b
      Authored by Gavi Teitz
      Towards introducing the ability to allocate bulks of flow counters,
      refactor the flow counter bulk query process. Remove functions and
      structs whose names suggested they were used for flow counter bulk
      allocation FW commands when they actually only supported bulk
      querying, and migrate their functionality to correctly named functions
      in their natural location, fs_counters.c.
      
      Additionally, optimize the bulk query process (sketched below) by:
       * Extracting the memory used for the query into mlx5_fc_stats so
         that it is allocated only once, not for each bulk query.
       * Querying all the counters in one function call.
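
      Roughly, the optimized query path looks like this hedged sketch; the
      helper names and fields are illustrative assumptions, with only
      struct mlx5_core_dev and struct mlx5_fc_stats being real driver types:

       /* Hedged sketch: the query output buffer is allocated once at init
        * and reused, and all counters are queried in a single call. */
       static void fc_stats_query_all(struct mlx5_core_dev *dev,
                                      struct mlx5_fc_stats *fc_stats)
       {
               /* fc_stats->bulk_query_out was allocated once, at init. */
               fc_bulk_query(dev, fc_stats->first_counter_id,
                             fc_stats->num_counters,
                             fc_stats->bulk_query_out);
               fc_stats_update_cache(fc_stats); /* refresh cached stats */
       }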
      Signed-off-by: Gavi Teitz <gavi@mellanox.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  10. 04 Jul 2019, 2 commits
  11. 02 Jul 2019, 3 commits
  12. 25 Jun 2019, 1 commit
  13. 14 Jun 2019, 10 commits
  14. 01 Jun 2019, 2 commits
  15. 30 Apr 2019, 2 commits
  16. 25 Apr 2019, 1 commit
  17. 03 Apr 2019, 3 commits
  18. 30 Mar 2019, 1 commit
  19. 02 Mar 2019, 2 commits
  20. 16 Feb 2019, 1 commit
  21. 15 Feb 2019, 2 commits