1. 05 April 2012, 4 commits
    • net/mlx4_en: Set max rate-limit for a TC · 109d2446
      Committed by Amir Vadai
      This patch uses the DCB netlink interface to set a rate limit per ETS TC.
      Values are accepted in Kbps and rounded up to the nearest multiple of 100 Mbps
      (see the rounding sketch below).
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
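      A minimal sketch of the rounding described above, assuming a granularity of
      100 Mbps expressed in Kbps; the helper name and values are illustrative and
      not the driver's code:

          #include <stdint.h>
          #include <stdio.h>

          /* Round a rate given in Kbps up to the nearest multiple of
           * 100 Mbps (100000 Kbps), as the commit message describes.
           * 0 is kept as-is, since it typically means "unlimited". */
          #define RATE_GRANULARITY_KBPS 100000ULL

          static uint64_t round_up_to_100mbps(uint64_t rate_kbps)
          {
                  if (rate_kbps == 0)
                          return 0;
                  return ((rate_kbps + RATE_GRANULARITY_KBPS - 1) /
                          RATE_GRANULARITY_KBPS) * RATE_GRANULARITY_KBPS;
          }

          int main(void)
          {
                  /* 250000 Kbps (250 Mbps) rounds up to 300000 Kbps (300 Mbps). */
                  printf("%llu\n", (unsigned long long)round_up_to_100mbps(250000));
                  return 0;
          }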
    • net/mlx4_en: DCB QoS support · 564c274c
      Committed by Amir Vadai
      Set the TSA, promised BW, and PFC configuration using IEEE 802.1Qaz netlink
      commands (an illustrative configuration follows below).
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
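      A sketch of the kind of configuration these netlink commands carry. The
      local structs loosely mirror the IEEE 802.1Qaz objects passed to the dcbnl
      "ieee" ops (struct ieee_ets / struct ieee_pfc in include/uapi/linux/dcbnl.h);
      the constants and the specific values are assumptions for the example:

          #include <stdint.h>
          #include <stdio.h>

          #define MAX_TCS        8
          #define TSA_STRICT     0   /* strict priority */
          #define TSA_ETS        2   /* enhanced transmission selection */

          struct ets_cfg {
                  uint8_t tc_tx_bw[MAX_TCS];  /* % of link BW promised to each TC */
                  uint8_t tc_tsa[MAX_TCS];    /* selection algorithm per TC */
                  uint8_t prio_tc[MAX_TCS];   /* user priority -> TC mapping */
          };

          struct pfc_cfg {
                  uint8_t pfc_en;             /* bitmap of PFC-enabled priorities */
          };

          int main(void)
          {
                  struct ets_cfg ets = { { 0 } };
                  struct pfc_cfg pfc = { 0 };
                  int tc;

                  /* TC0/TC1 share the link 60/40 under ETS, the rest are strict. */
                  ets.tc_tx_bw[0] = 60;
                  ets.tc_tx_bw[1] = 40;
                  ets.tc_tsa[0] = TSA_ETS;
                  ets.tc_tsa[1] = TSA_ETS;
                  for (tc = 2; tc < MAX_TCS; tc++)
                          ets.tc_tsa[tc] = TSA_STRICT;

                  /* Map priority 3 to TC1 and enable PFC on it. */
                  ets.prio_tc[3] = 1;
                  pfc.pfc_en = 1 << 3;

                  printf("TC0 BW %u%%, TC1 BW %u%%, PFC mask 0x%02x\n",
                         ets.tc_tx_bw[0], ets.tc_tx_bw[1], pfc.pfc_en);
                  return 0;
          }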
    • net/mlx4_en: Force user priority by QP attribute · 0e98b523
      Committed by Amir Vadai
      Instead of relying on the HW to choose the schedule queue by UP, the schedule
      queue is fixed per tx_ring, and the UP in the WQE is ignored in this respect.
      This resolves two issues with untagged traffic:
      1. Untagged traffic carries no UP in the packet, which is needed for QoS. The
         change above allows setting the schedule queue (and thereby the UP) of such
         a stream.
      2. BlueFlame uses the same field used by the vlan tag, so forcing the UP from
         the QPC allows using BF for untagged but prioritized traffic.
      
      On old firmware that does not support forcing the UP, untagged traffic will
      not be subject to QoS.
      
      Because the UP is set per QP, a tx ring is always needed per UP, even if the
      pfcrx module parameter is false (see the ring-selection sketch below).
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
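      A hypothetical illustration of "a tx ring per UP": each user priority owns a
      fixed block of tx rings, and the ring index is derived from the UP plus a
      flow hash, so the schedule queue bound to that ring determines the UP even
      for untagged packets. The names and ring counts are assumptions for the
      sketch, not the driver's actual layout:

          #include <stdint.h>
          #include <stdio.h>

          #define NUM_UP        8   /* user priorities 0..7 */
          #define RINGS_PER_UP  4   /* tx rings dedicated to each UP (assumed) */

          static unsigned int select_tx_ring(unsigned int up, uint32_t flow_hash)
          {
                  /* Rings [up*RINGS_PER_UP, up*RINGS_PER_UP + RINGS_PER_UP) all
                   * carry traffic of this UP; the hash only spreads flows. */
                  return up * RINGS_PER_UP + (flow_hash % RINGS_PER_UP);
          }

          int main(void)
          {
                  /* An untagged but prioritized flow (UP 5) still lands in UP 5's rings. */
                  printf("UP 5, hash 0x1234 -> ring %u\n", select_tx_ring(5, 0x1234));
                  printf("UP 0, hash 0x1234 -> ring %u\n", select_tx_ring(0, 0x1234));
                  return 0;
          }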
    • mlx4: allocate just enough pages instead of always 4 pages · 117980c4
      Committed by Thadeu Lima de Souza Cascardo
      The driver uses an order-2 allocation, which is too much on architectures
      like ppc64 that have 64KiB pages. This particular allocation is used for
      large packet fragments that may have a size of 512, 1024, or 4096 bytes, or
      fill the whole allocation. So a minimum size of 16384 bytes is good enough,
      and it is the same size that is used on architectures with 4KiB pages (see
      the order-computation sketch after the trace below).
      
      This will avoid allocation failures that we see when the system is under
      stress, but still has plenty of memory, like the one below.
      
      This will also allow us to set the interface MTU to higher values like
      9000, which was not possible on ppc64 without this patch.
      
      Node 1 DMA: 737*64kB 37*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB = 51904kB
      83137 total pagecache pages
      0 pages in swap cache
      Swap cache stats: add 0, delete 0, find 0/0
      Free swap  = 10420096kB
      Total swap = 10420096kB
      107776 pages RAM
      1184 pages reserved
      147343 pages shared
      28152 pages non-shared
      netstat: page allocation failure. order:2, mode:0x4020
      Call Trace:
      [c0000001a4fa3770] [c000000000012f04] .show_stack+0x74/0x1c0 (unreliable)
      [c0000001a4fa3820] [c00000000016af38] .__alloc_pages_nodemask+0x618/0x930
      [c0000001a4fa39a0] [c0000000001a71a0] .alloc_pages_current+0xb0/0x170
      [c0000001a4fa3a40] [d00000000dcc3e00] .mlx4_en_alloc_frag+0x200/0x240 [mlx4_en]
      [c0000001a4fa3b10] [d00000000dcc3f8c] .mlx4_en_complete_rx_desc+0x14c/0x250 [mlx4_en]
      [c0000001a4fa3be0] [d00000000dcc4eec] .mlx4_en_process_rx_cq+0x62c/0x850 [mlx4_en]
      [c0000001a4fa3d20] [d00000000dcc5150] .mlx4_en_poll_rx_cq+0x40/0x90 [mlx4_en]
      [c0000001a4fa3dc0] [c0000000004e2bb8] .net_rx_action+0x178/0x450
      [c0000001a4fa3eb0] [c00000000009c9b8] .__do_softirq+0x118/0x290
      [c0000001a4fa3f90] [c000000000031df8] .call_do_softirq+0x14/0x24
      [c000000184c3b520] [c00000000000e700] .do_softirq+0xf0/0x110
      [c000000184c3b5c0] [c00000000009c6d4] .irq_exit+0xb4/0xc0
      [c000000184c3b640] [c00000000000e964] .do_IRQ+0x144/0x230
      Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
      Tested-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
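      A minimal sketch of computing an allocation order from a minimum size instead
      of hard-coding order 2, assuming a 16384-byte minimum as in the commit text;
      the helper name is illustrative, and in the kernel the page size comes from
      the architecture rather than a parameter:

          #include <stdio.h>

          #define MIN_ALLOC_SIZE 16384UL

          /* Smallest page order whose allocation covers at least `size` bytes. */
          static int order_for_size(unsigned long size, unsigned long page_size)
          {
                  int order = 0;
                  unsigned long chunk = page_size;

                  while (chunk < size) {
                          chunk <<= 1;
                          order++;
                  }
                  return order;
          }

          int main(void)
          {
                  /* With 4KiB pages the order is 2 (16KiB); with 64KiB pages it is 0. */
                  printf("4KiB pages  -> order %d\n", order_for_size(MIN_ALLOC_SIZE, 4096));
                  printf("64KiB pages -> order %d\n", order_for_size(MIN_ALLOC_SIZE, 65536));
                  return 0;
          }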
  2. 07 March 2012, 3 commits
  3. 07 February 2012, 1 commit
  4. 23 January 2012, 1 commit
  5. 19 January 2012, 1 commit
  6. 14 December 2011, 1 commit
  7. 28 November 2011, 3 commits
  8. 15 November 2011, 1 commit
  9. 01 November 2011, 1 commit
  10. 19 October 2011, 2 commits
  11. 10 October 2011, 3 commits
  12. 11 August 2011, 1 commit
  13. 22 July 2011, 1 commit
  14. 16 April 2011, 1 commit
  15. 24 March 2011, 7 commits
  16. 26 October 2010, 1 commit
    • IB/mlx4: Add VLAN support for IBoE · 4c3eb3ca
      Committed by Eli Cohen
      This patch allows IBoE traffic to be encapsulated in 802.1Q tagged
      VLAN frames.  The VLAN tag is encoded in the GID and derived from it
      by a simple computation (see the sketch below).
      
      The netdev notifier callback is modified to catch VLAN device
      addition/removal, and the port's GID table is updated to reflect the
      change, so that for each netdevice there is an entry in the GID table.
      When the port's GID table is exhausted, GID entries will not be added.
      Only children of the main interfaces can add entries to the GID table; if a
      VLAN interface is added on top of another VLAN interface (e.g. "vconfig add
      eth2.6 8"), then that interface will not add an entry to the GID table.
      Signed-off-by: Eli Cohen <eli@mellanox.co.il>
      Signed-off-by: Roland Dreier <rolandd@cisco.com>
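      An illustrative sketch of encoding a VLAN ID into the interface-ID (EUI-64)
      half of an IBoE GID. The exact layout the driver uses may differ; the point
      is that the two filler bytes of a MAC-derived EUI-64 can carry the 12-bit
      VLAN ID, so the tag can later be recovered from the GID by a simple
      computation:

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          static void build_iboe_ifid(uint8_t ifid[8], const uint8_t mac[6],
                                      uint16_t vlan_id)
          {
                  /* Split the MAC around the two middle bytes of the EUI-64. */
                  memcpy(ifid, mac, 3);
                  memcpy(ifid + 5, mac + 3, 3);

                  if (vlan_id < 0x1000) {          /* valid 12-bit VLAN ID */
                          ifid[3] = vlan_id >> 8;
                          ifid[4] = vlan_id & 0xff;
                  } else {                         /* no VLAN: standard 0xff 0xfe filler */
                          ifid[3] = 0xff;
                          ifid[4] = 0xfe;
                  }
                  ifid[0] ^= 0x02;                 /* flip the universal/local bit */
          }

          int main(void)
          {
                  const uint8_t mac[6] = { 0x00, 0x02, 0xc9, 0x12, 0x34, 0x56 };
                  uint8_t ifid[8];
                  int i;

                  build_iboe_ifid(ifid, mac, 6);   /* e.g. a child of VLAN 6 */
                  for (i = 0; i < 8; i++)
                          printf("%02x%s", ifid[i], i == 7 ? "\n" : ":");
                  return 0;
          }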
  17. 07 September 2010, 1 commit
  18. 25 August 2010, 7 commits