1. 26 Jan 2020, 1 commit
    • IB/mlx5: Return the administrative GUID if exists · 4bbd4923
      Danit Goldberg authored
      A user can change the operational GUID (a.k.a. the effective GUID) through
      link/infiniband. It is therefore preferable to return the administrative
      GUID, if one is set, instead of the operational one.
      
      This way the PF can query which VF GUID will be set at the next bind. To
      align with MAC address behavior, zero is returned if the administrative
      GUID is not set.
      
      For example, before setting administrative GUID:
       $ ip link show
       ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP mode DEFAULT group default qlen 256
       link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
       vf 0     link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff,
       spoof checking off, NODE_GUID 00:00:00:00:00:00:00:00, PORT_GUID 00:00:00:00:00:00:00:00, link-state auto, trust off, query_rss off
      
      Then:
      
       $ ip link set ib0 vf 0 node_guid 11:00:af:21:cb:05:11:00
       $ ip link set ib0 vf 0 port_guid 22:11:af:21:cb:05:11:00
      
      After setting administrative GUID:
       $ ip link show
       ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP mode DEFAULT group default qlen 256
       link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
       vf 0     link/infiniband 00:00:00:08:fe:80:00:00:00:00:00:00:52:54:00:c0:fe:12:34:55 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff,
       spoof checking off, NODE_GUID 11:00:af:21:cb:05:11:00, PORT_GUID 22:11:af:21:cb:05:11:00, link-state auto, trust off, query_rss off
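      The reporting behavior can be sketched as follows. This is an illustrative userspace model; the struct and function names are hypothetical, not the mlx5 driver's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of a VF's GUID state; 0 means "not set". */
struct vf_guid_state {
    uint64_t admin_guid; /* set via `ip link set ... vf N node_guid/port_guid` */
    uint64_t oper_guid;  /* GUID currently in operation */
};

/* After this change: report the administrative GUID (zero when unset),
 * mirroring how VF MAC addresses are reported, rather than oper_guid. */
static uint64_t query_vf_guid(const struct vf_guid_state *s)
{
    return s->admin_guid;
}
```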
      
      Fixes: 9c0015ef ("IB/mlx5: Implement callbacks for getting VFs GUID attributes")
      Link: https://lore.kernel.org/r/20200116120048.12744-1-leon@kernel.org
      Signed-off-by: Danit Goldberg <danitg@mellanox.com>
      Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
  2. 17 Jan 2020, 2 commits
  3. 11 Jan 2020, 2 commits
  4. 23 Nov 2019, 1 commit
  5. 14 Nov 2019, 1 commit
    • net/mlx5: Add new chain for netfilter flow table offload · 975b992f
      Paul Blakey authored
      Netfilter tables (nftables) implement a software datapath that comes
      after the tc ingress datapath. That datapath supports offloading its
      rules via the flow table offload API.
      
      This API is currently used only by nftables, and it does not define a
      global priority relative to tc offload, so we assume such rules must be
      offloaded after tc. It does provide a flow table priority parameter, so
      we need to expose some supported priority range.
      
      For that, split the fastpath prio in two, flow table offload and tc
      offload, with one dedicated priority chain for flow table offload.
      
      The next patch will reuse the multi-chain API to allow access to this
      chain through the fdb_sub_namespace.
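      The priority split described above can be sketched as follows. The constant names and values are illustrative, not the driver's actual FDB priorities:

```c
#include <assert.h>

/* Illustrative split of the fastpath priority space in two:
 * tc offload rules first, netfilter flow table rules after. */
enum fdb_offload_prio {
    FDB_TC_OFFLOAD_PRIO = 1, /* tc ingress rules are matched first */
    FDB_FT_OFFLOAD_PRIO = 2, /* dedicated chain for flow table offload */
};

/* Rules arriving via the flow table offload API always land in the
 * dedicated priority chain that sits after all tc chains. */
static int prio_for_source(int from_nft)
{
    return from_nft ? FDB_FT_OFFLOAD_PRIO : FDB_TC_OFFLOAD_PRIO;
}
```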
      Signed-off-by: Paul Blakey <paulb@mellanox.com>
      Reviewed-by: Mark Bloch <markb@mellanox.com>
      Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  6. 12 Nov 2019, 1 commit
  7. 02 Nov 2019, 1 commit
  8. 30 Oct 2019, 1 commit
  9. 29 Oct 2019, 1 commit
  10. 08 Oct 2019, 1 commit
  11. 24 Sep 2019, 1 commit
  12. 06 Sep 2019, 1 commit
  13. 04 Sep 2019, 1 commit
  14. 02 Sep 2019, 3 commits
  15. 28 Aug 2019, 1 commit
  16. 22 Aug 2019, 1 commit
    • net/mlx5: Add HV VHCA infrastructure · 87175120
      Eran Ben Elisha authored
      HV VHCA is a layer that provides a PF-to-VF communication channel based
      on the HyperV PCI config channel. It implements Mellanox's inter-VHCA
      control communication protocol. The protocol contains a control block
      for passing messages between the PF and VF drivers, and data blocks for
      passing the actual data.
      
      The infrastructure is agent based. Each agent is responsible for a
      contiguous range of buffer blocks in the VHCA config space. The
      infrastructure binds agents to their blocks, and an agent can only
      read/write the buffer blocks assigned to it. Each agent provides three
      callbacks (control, invalidate, cleanup). Control is invoked when
      block-0 is invalidated with a command that concerns this agent.
      Invalidate is invoked when one of the blocks assigned to the agent is
      invalidated. Cleanup is invoked before the agent is freed, so it can
      release all of its open resources and deferred work.
      
      Block-0 serves as the control block. All execution commands from the PF
      are written by the PF to this block, and the VF acks them by writing to
      block-0 as well. Its format is described by the struct
      mlx5_hv_vhca_control_block layout.
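      The agent model above can be sketched as follows. This is a minimal illustrative model; the struct names, callback signatures, and ownership check are hypothetical, not the driver's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct hv_agent;

/* The three callbacks each agent registers (names illustrative). */
struct hv_agent_ops {
    void (*control)(struct hv_agent *a, uint16_t cmd); /* block-0 command for this agent */
    void (*invalidate)(struct hv_agent *a, int block); /* one of the agent's blocks changed */
    void (*cleanup)(struct hv_agent *a);               /* called before the agent is freed */
};

/* An agent owns a contiguous range of buffer blocks. */
struct hv_agent {
    int first_block;
    int nblocks;
    const struct hv_agent_ops *ops;
};

/* Agents may only read/write the blocks assigned to them. */
static int agent_owns_block(const struct hv_agent *a, int block)
{
    return block >= a->first_block && block < a->first_block + a->nblocks;
}
```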
      Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 21 Aug 2019, 4 commits
  18. 14 Aug 2019, 1 commit
  19. 13 Aug 2019, 1 commit
  20. 11 Aug 2019, 1 commit
  21. 10 Aug 2019, 3 commits
  22. 09 Aug 2019, 2 commits
  23. 08 Aug 2019, 1 commit
  24. 04 Aug 2019, 1 commit
  25. 02 Aug 2019, 5 commits
    • net/mlx5: Add flow counter pool · 558101f1
      Gavi Teitz authored
      Add a pool of flow counters, based on flow counter bulks, removing the
      need to allocate a new counter via a costly FW command during the flow
      creation process. The time it takes to acquire/release a flow counter
      is cut from ~50 [us] to ~50 [ns].
      
      The pool is part of the mlx5 driver instance and provides flow
      counters for aging flows. mlx5_fc_create() was modified to provide
      counters for aging flows from the pool by default, and
      mlx5_fc_destroy() was modified to release counters back to the pool
      for later reuse. If bulk allocation is not supported or fails, and for
      non-aging flows, the fallback behavior is to allocate and free
      individual counters.
      
      The pool is composed of three lists of flow counter bulks: one of
      fully used bulks, one of partially used bulks, and one of unused
      bulks. Counters are provided from the partially used bulks first, to
      help limit bulk fragmentation.
      
      The pool maintains a threshold, and strives to maintain the amount of
      available counters below it. The pool is increased in size when a
      counter acquisition request is made and there are no available
      counters, and it is decreased in size when the last counter in a bulk
      is released and there are more available counters than the threshold.
      All pool size changes are done in the context of the
      acquiring/releasing process.
      
      The value of the threshold is directly correlated to the amount of
      used counters the pool is providing, while constrained by a hard
      maximum, and is recalculated every time a bulk is allocated/freed.
      This ensures that the pool only consumes large amounts of memory for
      available counters if the pool is being used heavily. When fully
      populated and at the hard maximum, the buffer of available counters
      consumes ~40 [MB].
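      The acquire path of such a pool can be sketched as follows. This is a simplified userspace model: the names, the bulk size, and the list handling are illustrative, not the driver's actual fs_counters implementation, and the threshold-driven shrinking is omitted:

```c
#include <assert.h>
#include <stdlib.h>

#define BULK_SIZE 4 /* illustrative; real bulk sizes come from FW caps */

struct fc_bulk {
    int used;             /* counters handed out from this bulk */
    struct fc_bulk *next;
};

struct fc_pool {
    struct fc_bulk *partial; /* preferred source, limits fragmentation */
    struct fc_bulk *full;    /* fully used bulks */
    int available;           /* free counters across all bulks */
};

static struct fc_bulk *fc_pool_acquire(struct fc_pool *p)
{
    struct fc_bulk *b;

    if (!p->partial) { /* models the costly FW bulk-alloc command */
        b = calloc(1, sizeof(*b));
        if (!b)
            return NULL;
        p->partial = b;
        p->available += BULK_SIZE;
    }
    b = p->partial;
    b->used++;
    p->available--;
    if (b->used == BULK_SIZE) { /* exhausted: move to the full list */
        p->partial = b->next;
        b->next = p->full;
        p->full = b;
    }
    return b;
}
```

      Serving from partially used bulks first keeps whole bulks free for release, which is what lets the pool shrink when usage drops.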
      Signed-off-by: Gavi Teitz <gavi@mellanox.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • net/mlx5: E-Switch, Verify support QoS element type · 6cedde45
      Eli Cohen authored
      Check whether the firmware supports the requested element type before
      attempting to create the element.
      In addition, explicitly specify the requested element type and TSAR type.
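      The check amounts to testing a capability mask before issuing the create command. A minimal sketch, with illustrative bit values that do not match the actual mlx5 scheduling-element capability layout:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical element-type capability bits (illustrative values). */
#define ELEM_TYPE_TSAR  (1u << 0)
#define ELEM_TYPE_VPORT (1u << 1)

/* Refuse to create a scheduling element the FW does not advertise. */
static int element_type_supported(uint32_t fw_caps, uint32_t requested)
{
    return (fw_caps & requested) == requested;
}
```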
      Signed-off-by: Eli Cohen <eli@mellanox.com>
      Reviewed-by: Paul Blakey <paulb@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • net/mlx5: Fix offset of tisc bits reserved field · 7761f9ee
      Saeed Mahameed authored
      The first reserved field is off by one: it should be reserved_at_2
      instead of reserved_at_1. Fix that.
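      In mlx5_ifc-style layouts, reserved fields are named by their starting bit offset, so a gap beginning at bit 2 must be called reserved_at_2. A sketch using C bitfields (the field names before the gap are illustrative, not the actual tisc layout):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 32-bit layout: two defined bits, then a reserved gap.
 * The gap starts at bit 2, hence reserved_at_2 (reserved_at_1 would
 * misstate the offset). */
struct tisc_bits_sketch {
    unsigned int strict_lag_tx_port_affinity : 1; /* bit 0 */
    unsigned int tls_en : 1;                      /* bit 1 */
    unsigned int reserved_at_2 : 30;              /* bits 2..31 */
};
```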
      
      Fixes: a12ff35e ("net/mlx5: Introduce TLS TX offload hardware bits and structures")
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • net/mlx5: Add flow counter bulk allocation hardware bits and command · 8536a6bf
      Gavi Teitz authored
      Add a handle to invoke the new FW capability of allocating a bulk of
      flow counters.
      Signed-off-by: Gavi Teitz <gavi@mellanox.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
    • net/mlx5: Refactor and optimize flow counter bulk query · 6f06e04b
      Gavi Teitz authored
      In preparation for introducing the ability to allocate bulks of flow
      counters, refactor the flow counter bulk query process: remove
      functions and structs whose names suggested they served flow counter
      bulk allocation FW commands when they actually only supported bulk
      querying, and migrate their functionality to correctly named functions
      in their natural location, fs_counters.c.
      
      Additionally, optimize the bulk query process by:
       * Extracting the memory used for the query to mlx5_fc_stats so
         that it is only allocated once, and not for each bulk query.
       * Querying all the counters in one function call.
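      The first optimization, allocating the query buffer once rather than per query, can be sketched as follows. The names are illustrative, not the actual mlx5_fc_stats fields:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Per-device stats context holding one reusable query buffer. */
struct fc_stats {
    uint32_t *bulk_query_out; /* sized once for the largest bulk */
    size_t bulk_query_len;
};

static int fc_stats_init(struct fc_stats *s, size_t max_bulk)
{
    s->bulk_query_len = max_bulk;
    s->bulk_query_out = calloc(max_bulk, sizeof(*s->bulk_query_out));
    return s->bulk_query_out ? 0 : -1;
}

/* Every bulk query reuses the same buffer; no per-query allocation. */
static uint32_t *fc_bulk_query_buf(struct fc_stats *s)
{
    return s->bulk_query_out;
}
```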
      Signed-off-by: Gavi Teitz <gavi@mellanox.com>
      Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  26. 26 Jul 2019, 1 commit
    • net/mlx5e: Prevent encap flow counter update async to user query · 90bb7692
      Ariel Levkovich authored
      This patch prevents a race between a user-invoked cached-counters
      query and the neighbor last-usage updater.
      
      The cached flow counter stats can be queried by calling
      mlx5_fc_query_cached(), which provides the number of bytes and
      packets that passed via this flow since the last time this counter
      was queried.
      It does so by subtracting the last saved stats from the current cached
      stats and then updating the last saved stats with the cached stats.
      It also provides the lastuse value for that flow.
      
      Since "mlx5e_tc_update_neigh_used_value" needs to retrieve the
      last usage time of encapsulation flows, it calls the flow counter
      query method periodically and async to user queries of the flow counter
      using cls_flower.
      This call is causing the driver to update the last reported bytes and
      packets from the cache and therefore, future user queries of the flow
      stats will return lower than expected number for bytes and packets
      since the last saved stats in the driver was updated async to the last
      saved stats in cls_flower.
      
      This causes wrong stats presentation of encapsulation flows to user.
      
      Since the neighbor usage updater only needs the lastuse value from the
      cached counter, the fix is to use a dedicated lastuse query call that
      returns the lastuse value without syncing the cached stats with the
      last saved stats.
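      The race and the fix can be sketched as follows. This is an illustrative model; the struct and function names are hypothetical, not the actual fs_counters API:

```c
#include <assert.h>
#include <stdint.h>

struct fc_cache {
    uint64_t packets, bytes;           /* cached HW stats */
    uint64_t lastuse;                  /* last time HW saw this flow */
    uint64_t last_packets, last_bytes; /* baseline of the last user query */
};

/* User-facing query: returns the delta since the previous query and
 * advances the baseline, so calling it consumes the delta. This is
 * what the neighbor updater must NOT call. */
static void fc_query_cached(struct fc_cache *c, uint64_t *dpkts,
                            uint64_t *dbytes)
{
    *dpkts = c->packets - c->last_packets;
    *dbytes = c->bytes - c->last_bytes;
    c->last_packets = c->packets;
    c->last_bytes = c->bytes;
}

/* The fix: a dedicated read-only lastuse query that leaves the user's
 * baseline untouched, keeping user-visible deltas correct. */
static uint64_t fc_query_lastuse(const struct fc_cache *c)
{
    return c->lastuse;
}
```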
      
      Fixes: f6dfb4c3 ("net/mlx5e: Update neighbour 'used' state using HW flow rules counters")
      Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
      Reviewed-by: Roi Dayan <roid@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>