1. 26 Jul 2016, 1 commit
  2. 27 Jun 2016, 1 commit
    • net/mlx5: Rate limit tables support · 1466cc5b
      Authored by Yevgeny Petrilin
      Configure and manage HW rate limit tables.
      The HW holds a table of rate limits; each rate is
      associated with an index in that table.
      A Send Queue later uses this index to set its rate limit.
      Multiple Send Queues can share the same rate limit, which is
      represented by a single entry in this table.
      Even though a rate can be shared, each queue is rate
      limited independently of the others.
      
      The SW shadow of this table holds the rate itself,
      the index in the HW table, and the refcount (the number of
      queues working with this rate).
      
      The exported functions are mlx5_rl_add_rate and mlx5_rl_remove_rate.
      The number of different rates and their values are derived
      from the HW capabilities.
      Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
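      A minimal sketch, in plain C, of how such a refcounted SW shadow table
      could be modeled; the struct layout and the rl_add helper below are
      illustrative assumptions, not the actual mlx5 driver code:

          #include <stdint.h>
          #include <stddef.h>

          /* One shadow entry: the rate, its index in the HW table, and how
           * many Send Queues currently use it. refcount == 0 marks a free
           * slot that add-rate logic may claim. */
          struct rl_entry {
                  uint32_t rate;      /* rate limit value, e.g. in Mbps */
                  uint16_t index;     /* index of this rate in the HW table */
                  uint32_t refcount;  /* number of queues sharing this rate */
          };

          /* Adding a rate: reuse an existing entry when the rate is already
           * programmed, otherwise claim a free slot (and program the HW). */
          static struct rl_entry *rl_add(struct rl_entry *tbl, size_t n,
                                         uint32_t rate)
          {
                  struct rl_entry *free_slot = NULL;

                  for (size_t i = 0; i < n; i++) {
                          if (tbl[i].refcount && tbl[i].rate == rate) {
                                  tbl[i].refcount++;  /* rate is shared */
                                  return &tbl[i];
                          }
                          if (!tbl[i].refcount && !free_slot)
                                  free_slot = &tbl[i];
                  }
                  if (free_slot) {
                          free_slot->rate = rate;
                          free_slot->refcount = 1;
                          /* ...program the rate into HW at free_slot->index... */
                  }
                  return free_slot;
          }

      mlx5_rl_remove_rate would do the inverse: decrement the refcount and
      release the HW entry once it drops to zero. The table size (n here)
      would come from the HW capabilities mentioned above.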
  3. 10 Jun 2016, 2 commits
  4. 12 May 2016, 1 commit
  5. 05 May 2016, 1 commit
    • net/mlx5: Flow steering, Add vport ACL support · efdc810b
      Authored by Mohamad Haj Yahia
      Update the relevant flow steering device structs and commands to
      support vports.
      Update the flow steering core API to receive a vport number.
      Add ingress and egress ACL flow table namespaces.
      Add ACL flow table support:
      * An ACL (Access Control List) flow table is a table that contains
      only allow/drop steering rules.
      
      * There are two types of ACL flow tables: ingress and egress.
      
      * ACLs handle traffic sent from/to the E-Switch FDB table; ingress
      refers to traffic sent from the vport to the E-Switch, and egress
      refers to traffic sent from the E-Switch to the vport.
      
      * Ingress ACL flow table allow/drop rules are checked against traffic
      sent from the VF.
      
      * Egress ACL flow table allow/drop rules are checked against traffic
      sent to the VF.
      Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
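      A rough sketch of the allow/drop-only semantics such an ACL table
      implies; the enums, struct, and acl_allowed helper are hypothetical
      illustrations, not the mlx5 flow steering API:

          #include <stdbool.h>
          #include <stdint.h>
          #include <stddef.h>
          #include <string.h>

          /* ACL direction, relative to the vport (illustrative). */
          enum acl_dir {
                  ACL_INGRESS,  /* traffic from the vport toward the E-Switch */
                  ACL_EGRESS,   /* traffic from the E-Switch toward the vport */
          };

          enum acl_action { ACL_ALLOW, ACL_DROP };

          /* One allow/drop rule; an ACL table holds only rules like this. */
          struct acl_rule {
                  uint8_t         smac[6];  /* match: source MAC */
                  uint16_t        vlan;     /* match: VLAN id, 0 = any */
                  enum acl_action action;
          };

          /* Check one packet's headers against a vport's ACL rules.
           * First matching rule wins; an empty table allows everything. */
          static bool acl_allowed(const struct acl_rule *rules, size_t n,
                                  const uint8_t smac[6], uint16_t vlan)
          {
                  for (size_t i = 0; i < n; i++) {
                          bool mac_ok  = memcmp(rules[i].smac, smac, 6) == 0;
                          bool vlan_ok = rules[i].vlan == 0 ||
                                         rules[i].vlan == vlan;

                          if (mac_ok && vlan_ok)
                                  return rules[i].action == ACL_ALLOW;
                  }
                  return true;
          }

      In the driver, the same idea is expressed as flow steering rules in the
      per-vport ingress/egress ACL namespaces rather than as an in-memory
      array.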
  6. 28 Apr 2016, 1 commit
  7. 27 Apr 2016, 3 commits
  8. 22 Apr 2016, 1 commit
    • net/mlx5e: Support RX multi-packet WQE (Striding RQ) · 461017cb
      Authored by Tariq Toukan
      Introduce the multi-packet WQE (RX Work Queue Element) feature,
      referred to as MPWQE or Striding RQ, in which WQEs are larger
      and each serves multiple packets.
      
      Every WQE consists of many strides of the same size; every received
      packet is aligned to the beginning of a stride and is written to
      consecutive strides within a WQE.
      
      In the regular approach, each WQE is big enough to serve one
      received packet of any size up to the MTU, or up to 64K when device
      LRO is enabled, which is very wasteful when dealing with small
      packets or when LRO is enabled.
      
      Thanks to its flexibility, MPWQE allows better memory utilization
      (implying improvements in CPU utilization and packet rate), as packets
      consume strides according to their size, leaving the rest of
      the WQE available for other packets.
      
      MPWQE default configuration:
      	Num of WQEs	= 16
      	Strides Per WQE = 2048
      	Stride Size	= 64 byte
      
      The default WQE memory footprint went from 1024 * MTU (~1.5MB) to
      16 * 2048 * 64 = 2MB per ring.
      However, HW LRO can now be supported at no additional cost in memory
      footprint, so we turn it on by default and get even better
      performance.
      
      Performance tested on ConnectX4-Lx 50G.
      To isolate the feature under test, the numbers below were measured with
      HW LRO turned off. We verified that performance only improves further
      when LRO is turned back on.
      
      * Netperf single TCP stream:
      - BW raised by 10-15% for representative packet sizes:
        default, 64B, 1024B, 1478B, 65536B.
      
      * Netperf multi TCP stream:
      - No degradation, line rate reached.
      
      * Pktgen: packet rate raised by 2-10% for traffic of different message
      sizes: 64B, 128B, 256B, 1024B, and 1500B.
      
      * Pktgen: packet loss in bursts of small messages (64 byte),
      single stream:
        | num packets | packet loss before | packet loss after |
        |     2K      |        ~1K         |         0         |
        |     8K      |        ~6K         |         0         |
        |     16K     |        ~13K        |         0         |
        |     32K     |        ~28K        |         0         |
        |     64K     |        ~57K        |        ~24K       |
      
      This is expected, as the driver can now receive as many small packets
      (<=64B) as the total number of strides in the ring (default = 2048 * 16),
      vs. 1024 (the default ring size regardless of packet size) before this
      feature.
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Signed-off-by: Achiad Shochat <achiad@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
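      A small sketch of the stride arithmetic described above; the constants
      mirror the default configuration quoted in the log, and the helper
      name is hypothetical:

          #include <stdio.h>
          #include <stdint.h>

          /* Default Striding RQ configuration from the commit message. */
          #define NUM_WQES        16
          #define STRIDES_PER_WQE 2048
          #define STRIDE_SIZE     64      /* bytes */

          /* A packet consumes whole strides: it starts at a stride boundary
           * and fills as many consecutive strides as its length requires. */
          static uint32_t strides_for_packet(uint32_t pkt_len)
          {
                  return (pkt_len + STRIDE_SIZE - 1) / STRIDE_SIZE;
          }

          int main(void)
          {
                  /* Ring footprint: 16 * 2048 * 64 B = 2 MB, as stated. */
                  uint64_t ring_bytes = (uint64_t)NUM_WQES * STRIDES_PER_WQE *
                                        STRIDE_SIZE;

                  /* Small-packet capacity: one stride each, so 2048 * 16 =
                   * 32768 packets per ring vs. 1024 packets with the regular
                   * one-packet-per-WQE approach. */
                  printf("ring footprint: %llu bytes\n",
                         (unsigned long long)ring_bytes);
                  printf("64B packet uses %u stride(s)\n",
                         strides_for_packet(64));
                  printf("1500B packet uses %u stride(s)\n",
                         strides_for_packet(1500));
                  return 0;
          }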
  9. 22 Mar 2016, 1 commit
  10. 10 Mar 2016, 1 commit
  11. 01 Mar 2016, 2 commits
  12. 25 Feb 2016, 2 commits
  13. 22 Jan 2016, 1 commit
  14. 12 Jan 2016, 1 commit
  15. 06 Jan 2016, 1 commit
  16. 24 Dec 2015, 3 commits
  17. 04 Dec 2015, 5 commits
  18. 15 Oct 2015, 1 commit
  19. 29 Sep 2015, 1 commit
  20. 25 Sep 2015, 1 commit
    • IB/mlx5: Remove support for IB_DEVICE_LOCAL_DMA_LKEY · c6790aa9
      Authored by Sagi Grimberg
      Commit 96249d70 ("IB/core: Guarantee that a local_dma_lkey
      is available") allows ULPs that make use of the local DMA lkey
      to keep working as before by allocating a DMA MR with local
      permissions, and converted these consumers to use the MR associated
      with the PD rather than device->local_dma_lkey.
      
      ConnectIB has some known issues with memory registration
      using the local_dma_lkey (SEND, RDMA, and RECV seem to work OK).
      
      Thus, don't expose support for it (remove the device->local_dma_lkey
      setting) and take advantage of the above commit so that no regression
      is introduced to working systems.
      
      The local_dma_lkey support will be restored in CX4 depending on a FW
      capability query.
      Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
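      A minimal sketch of what the fallback from commit 96249d70 means for a
      ULP: fill the SGE lkey from the PD instead of from the device
      attributes. The fill_sge helper is hypothetical, not code from this
      patch:

          #include <rdma/ib_verbs.h>

          /* Assumes an already allocated PD and a DMA-mapped buffer at
           * dma_addr of length len. */
          static void fill_sge(struct ib_sge *sge, struct ib_pd *pd,
                               u64 dma_addr, u32 len)
          {
                  sge->addr   = dma_addr;
                  sge->length = len;
                  /* Use the lkey guaranteed per-PD by commit 96249d70 rather
                   * than device->local_dma_lkey, which ConnectIB no longer
                   * advertises after this patch. */
                  sge->lkey   = pd->local_dma_lkey;
          }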
  21. 29 Aug 2015, 1 commit
  22. 07 Aug 2015, 1 commit
  23. 12 Jun 2015, 1 commit
  24. 05 Jun 2015, 1 commit
  25. 31 May 2015, 3 commits
  26. 03 Apr 2015, 1 commit
  27. 16 Dec 2014, 1 commit