1. 18 June 2013 (16 commits)
  2. 15 June 2013 (13 commits)
  3. 14 June 2013 (5 commits)
    • net/mlx4: Add VF link state support · 948e306d
      Authored by Rony Efraim
      Add support for changing the link state of a VF (vPort).
      Signed-off-by: Rony Efraim <ronye@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/core: Add VF link state control · 1d8faf48
      Authored by Rony Efraim
      Add netlink directives and an ndo entry to allow controlling the
      VF link, which can be in one of three states:
      
      Auto - VF link state reflects the PF link state (default)
      
      Up - VF link state is up, traffic from VF to VF works even if
      the actual PF link is down
      
      Down - VF link state is down; no traffic flows from or to this VF,
      which can be useful while configuring the VF
      Signed-off-by: Rony Efraim <ronye@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
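      To illustrate the new hook, here is a minimal sketch of how a PF driver
      might implement ndo_set_vf_link_state. The struct example_priv
      bookkeeping, the 8-VF limit and the missing bounds checking are
      hypothetical; only the IFLA_VF_LINK_STATE_* values and the callback
      signature come from this commit.

      #include <linux/errno.h>
      #include <linux/if_link.h>
      #include <linux/netdevice.h>

      /* Hypothetical per-VF bookkeeping kept in the PF driver's private data. */
      struct example_priv {
              int vf_link_state[8];   /* one of IFLA_VF_LINK_STATE_* per VF */
      };

      /* Sketch of an ndo_set_vf_link_state callback (hooked up through
       * struct net_device_ops). A real driver would validate vf and program
       * the firmware/hardware with the requested state. */
      static int example_set_vf_link_state(struct net_device *dev, int vf,
                                           int link_state)
      {
              struct example_priv *priv = netdev_priv(dev);

              switch (link_state) {
              case IFLA_VF_LINK_STATE_AUTO:    /* follow the PF link (default) */
              case IFLA_VF_LINK_STATE_ENABLE:  /* force the VF link up */
              case IFLA_VF_LINK_STATE_DISABLE: /* force the VF link down */
                      priv->vf_link_state[vf] = link_state;
                      return 0;
              default:
                      return -EINVAL;
              }
      }

      With a sufficiently recent iproute2 the state can then be set from user
      space, e.g. ip link set dev eth0 vf 0 state auto (or enable/disable).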
    • bcm63xx_enet: add support for Broadcom BCM6345 Ethernet · 3dc6475c
      Authored by Florian Fainelli
      This patch adds support for the Broadcom BCM6345 SoC Ethernet. BCM6345
      has a slightly different and older DMA engine which requires the
      following modifications:
      
      - the width of the DMA channels on BCM6345 is 64 bytes vs 16 bytes,
        which means that the helpers enet_dma{c,s} need to account for this
        channel width and we can no longer use macros
      
      - the BCM6345 DMA engine does not have any internal SRAM for
        transferring buffers
      
      - BCM6345 buffer allocation and flow control is not per-channel but
        global (done in RSET_ENETDMA)
      
      - the DMA engine bits are right-shifted by 3 compared to other DMA
        generations
      
      - the DMA enable/interrupt masks are a little different (we need to
        enable more bits for 6345)
      
      - some registers have the same meaning but sit at different offsets in
        the ENET_DMAC space, so a lookup table is required to return the
        proper offset
      
      The MAC itself is identical and requires no modifications to work.
      Signed-off-by: Florian Fainelli <florian@openwrt.org>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
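      To make the lookup-table point above concrete, here is a hedged sketch
      (the register names, offsets and struct fields are invented, not the
      real bcm63xx_enet definitions) of how per-SoC offset tables plus a
      per-channel stride can replace fixed macros:

      #include <linux/io.h>
      #include <linux/types.h>

      /* Logical ENET_DMAC registers; each SoC maps them to different offsets. */
      enum example_dmac_reg {
              DMAC_CHANCFG,
              DMAC_IR,
              DMAC_IRMASK,
              DMAC_MAXBURST,
              DMAC_REG_COUNT,
      };

      /* Hypothetical per-SoC offset tables (values are placeholders). */
      static const u32 dmac_offsets_bcm6345[DMAC_REG_COUNT] = {
              [DMAC_CHANCFG]  = 0x00,
              [DMAC_IR]       = 0x08,
              [DMAC_IRMASK]   = 0x0c,
              [DMAC_MAXBURST] = 0x40,
      };

      static const u32 dmac_offsets_default[DMAC_REG_COUNT] = {
              [DMAC_CHANCFG]  = 0x00,
              [DMAC_IR]       = 0x04,
              [DMAC_IRMASK]   = 0x08,
              [DMAC_MAXBURST] = 0x0c,
      };

      struct example_enet_priv {
              void __iomem *dmac_base;        /* base of the per-channel space */
              const u32 *dmac_offsets;        /* table picked at probe time */
              unsigned int dma_chan_width;    /* 64 on BCM6345, 16 otherwise */
      };

      /* Replaces fixed helper macros: both the channel stride and the
       * register offset now depend on the SoC generation. */
      static u32 example_dmac_readl(struct example_enet_priv *priv,
                                    enum example_dmac_reg reg, int chan)
      {
              return readl(priv->dmac_base +
                           chan * priv->dma_chan_width +
                           priv->dmac_offsets[reg]);
      }

      At probe time the driver would select dmac_offsets_bcm6345 or
      dmac_offsets_default and set dma_chan_width to 64 or 16 accordingly.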
    • htb: reorder struct htb_class fields for performance · ca4ec90b
      Authored by Eric Dumazet
      htb_class structures are big, and a source of false sharing on SMP.
      
      By carefully splitting them into two parts, we can improve performance.
      
      I got a 9% performance increase on a 24-thread machine, with 200
      concurrent netperf sessions in TCP_RR mode, using an HTB hierarchy of
      4 classes.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
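      The fix is the usual false-sharing remedy: keep the fields written in
      the per-packet fast path on their own cache line, away from read-mostly
      configuration. A generic sketch of the technique (field names are made
      up; the real layout is in net/sched/sch_htb.c):

      #include <linux/cache.h>
      #include <linux/types.h>

      /* Illustrative only: groups fields by access pattern so that per-packet
       * writers do not keep invalidating the cache line holding read-mostly
       * configuration shared by all CPUs. */
      struct example_class {
              /* Read-mostly: written at configuration time only. */
              u32     classid;
              int     quantum;
              int     prio;

              /* Hot, written on every packet; placed on its own cache line. */
              struct {
                      u64     bytes;
                      u64     packets;
                      s64     tokens;
              } hot ____cacheline_aligned_in_smp;
      };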
    • net-rps: fixes for rps flow limit · 5f121b9a
      Authored by Willem de Bruijn
      Caught by sparse:
      - __rcu: missing annotation on sd->flow_limit
      - __user: direct access in cpumask_scnprintf
      
      Also
      - add an endline character when printing the bitmap, if there is room
        in the buffer
      - avoid bucket overflow by reducing FLOW_LIMIT_HISTORY
      
      The last item warrants some explanation. The hashtable buckets are
      subject to overflow if FLOW_LIMIT_HISTORY is larger than or equal to
      the range of a bucket counter, since all packets may end up in a
      single bucket. The current (rather arbitrary) history value of 256
      happens to match exactly what a u8 bucket counter can hold.
      
      As a result, with a single flow, the first 128 packets are accepted
      (correct), the second 128 packets are dropped (correct), and by then
      the history[] array has filled, so each subsequent packet causes an
      increment in the bucket for new_flow plus a decrement for old_flow:
      a steady state.
      
      This steady state would be fine if packets kept being dropped, as it
      goes away as soon as a mix of traffic reappears. But because the 256th
      packet overflowed the u8 bucket back to 0, no packets are dropped at
      all from then on.
      
      Instead of explicitly adding an overflow check, this patch changes
      FLOW_LIMIT_HISTORY to never be able to overflow a single bucket.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      (first item)
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
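      The overflow argument is easy to reproduce in user space. The sketch
      below (plain C, not kernel code; the table size, flow id and packet
      count are arbitrary) mimics the bucket/history bookkeeping described
      above: with a 256-entry history the u8 bucket wraps to 0 on the 256th
      packet and dropping stops, while a smaller history keeps limiting the
      flow.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      #define NUM_BUCKETS 32u                 /* hypothetical table size */

      static unsigned int simulate(unsigned int history_len,
                                   unsigned int npackets)
      {
              uint8_t buckets[NUM_BUCKETS];   /* u8 counters, as in the kernel */
              uint8_t history[256];           /* large enough for both runs */
              unsigned int head = 0, drops = 0, i;
              const uint8_t flow = 7;         /* a single flow -> one bucket */

              memset(buckets, 0, sizeof(buckets));
              memset(history, 0, sizeof(history));

              for (i = 0; i < npackets; i++) {
                      uint8_t old_flow = history[head];

                      history[head] = flow;
                      head = (head + 1) % history_len;

                      if (buckets[old_flow])  /* age out the oldest entry */
                              buckets[old_flow]--;
                      /* the u8 counter silently wraps from 255 to 0 here */
                      if (buckets[flow]++ > history_len / 2)
                              drops++;
              }
              return drops;
      }

      int main(void)
      {
              printf("history=256: %u drops\n", simulate(256, 10000));
              printf("history=128: %u drops\n", simulate(128, 10000));
              return 0;
      }

      With a history of 256, only the pre-wrap packets are counted as drops;
      afterwards the bucket oscillates between 0 and 1 and the flow is never
      limited again, which is the bug this commit fixes.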
  4. 13 June 2013 (6 commits)