1. 01 Apr 2015 (33 commits)
  2. 30 Mar 2015 (7 commits)
    • tipc: fix two bugs in secondary destination lookup · d482994f
      Committed by Jon Paul Maloy
      A message sent to a node after a successful name table lookup may still
      find that the destination socket has disappeared, because distribution
      of name table updates is non-atomic. If so, the message will be rejected
      back to the sender with error code TIPC_ERR_NO_PORT. If the source
      socket of the message has disappeared in the meantime, the message
      should be dropped.
      
      However, in the current code, the message will instead be subject to an
      unwanted tertiary lookup, because the function tipc_msg_lookup_dest()
      doesn't check if there is an error code present in the message before
      performing the lookup. In the worst case, the message may now find the
      old destination again, and be redirected once more, instead of being
      dropped directly as it should be.
      
      A second bug in this function is that the "prev_node" field in the message
      is not updated after successful lookup, something that may have
      unpredictable consequences.
      
      The problems arising from those bugs occur very infrequently.
      
      The third change in this function, the test on msg_reroute_msg_cnt(), is
      purely cosmetic, reflecting that the returned value can never be negative.
      
      This commit corrects the two bugs described above.
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
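
      For illustration, a simplified stand-alone model (not the actual tipc
      code; struct and helper names are invented) of the corrected control
      flow: a rejected message is never looked up again, and prev_node is
      updated after a successful lookup.

      /* Illustrative model only -- it mirrors the two fixes, nothing more. */
      #include <stdbool.h>

      struct msg_model {
              int errcode;              /* non-zero: message was rejected */
              unsigned int dest_port;   /* destination socket/port */
              unsigned int prev_node;   /* node that performed the last redirect */
      };

      /* Hypothetical stand-in for the secondary name-table lookup. */
      static bool name_table_lookup(struct msg_model *msg)
      {
              return false;             /* no new destination found */
      }

      bool lookup_secondary_dest(struct msg_model *msg, unsigned int own_node)
      {
              /* Fix 1: rejected messages go back to the sender or are dropped;
               * they must never be redirected again. */
              if (msg->errcode)
                      return false;

              if (!name_table_lookup(msg))
                      return false;

              /* Fix 2: record which node handled this redirect. */
              msg->prev_node = own_node;
              return true;
      }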
    • Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue · 4d92a3e9
      Committed by David S. Miller
      Jeff Kirsher says:
      
      ====================
      Intel Wired LAN Driver Updates 2015-03-27
      
      This series contains updates to i40e and i40evf.
      
      Jesse adds new device IDs to handle the new 20G speed for KR2.
      
      Mitch provides a fix for an issue that shows up as a panic or memory
      corruption when the device is brought down while under heavy stress.
      This is resolved by delaying the release of resources until we
      receive acknowledgment from the PF driver that the rings have indeed
      been stopped.  Also adds firmware version information to ethtool
      reporting to align with ixgbevf behavior.
      
      Akeem increases the polling loop limiter, since we found that in
      certain circumstances the firmware can take longer to be ready after
      a reset.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
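
      As an aside, a minimal userspace-style sketch (not i40evf code; names
      and limits are invented) of the pattern described above: poll for the
      PF's acknowledgment with a bounded retry count, and only release the
      ring resources once the acknowledgment arrives.

      /* Illustrative pattern only -- not driver code. */
      #include <stdbool.h>
      #include <stdio.h>
      #include <unistd.h>

      #define ACK_POLL_RETRIES  100     /* hypothetical, enlarged retry limit */
      #define ACK_POLL_DELAY_US 1000

      static bool pf_acked_ring_stop(void)
      {
              /* the real driver would check an admin-queue event here */
              return true;
      }

      int main(void)
      {
              for (int i = 0; i < ACK_POLL_RETRIES; i++) {
                      if (pf_acked_ring_stop()) {
                              puts("rings confirmed stopped; releasing resources");
                              return 0;
                      }
                      usleep(ACK_POLL_DELAY_US);
              }
              puts("timed out waiting for PF acknowledgment");
              return 1;
      }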
    • Merge branch 'stacked_vlan_tso' · afb0bc97
      Committed by David S. Miller
      Toshiaki Makita says:
      
      ====================
      Stacked vlan TSO
      
      On the basis of the Netdev 0.1 discussion[1], I made a patch set to enable
      TSO for packets with multiple vlans.
      
      Currently, packets with multiple vlans are always segmented in software,
      because netif_skb_features() drops most feature flags for
      multiple-tagged packets.
      
      To allow NICs to segment them, we need to remove that check from the core.
      Fortunately, the recently introduced ndo_features_check() can be used to
      move the check into each driver, and this patch set is based on that idea.
      
      For the initial patch set, I chose three drivers, bonding, team, and igb,
      as candidates for enabling TSO. I tested them and confirmed they work
      fine with this change.
      
      Here are samples of the performance test results. As expected, %sys is
      considerably lower than before.
      
      * TEST1: vlan (.1Q) on vlan (.1ad) on igb (I350)
      
      - before
      
      $ netperf -t TCP_STREAM -H 192.168.10.1 -l 60
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    60.02     933.72
      
      Average:        CPU     %user     %nice   %system   %iowait    %steal     %idle
      Average:        all      0.13      0.00     11.28      0.01      0.00     88.58
      
      - after
      
      $ netperf -t TCP_STREAM -H 192.168.10.1 -l 60
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    60.01     936.13
      
      Average:        CPU     %user     %nice   %system   %iowait    %steal     %idle
      Average:        all      0.24      0.00      4.17      0.01      0.00     95.58
      
      * TEST2: vlan (.1Q) on bridge (.1ad vlan filtering) on team on igb (I350)
      
      - before
      
      $ netperf -t TCP_STREAM -H 192.168.10.1 -l 60
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    60.01     936.28
      
      Average:        CPU     %user     %nice   %system   %iowait    %steal     %idle
      Average:        all      0.41      0.00     11.57      0.01      0.00     88.01
      
      - after
      
      $ netperf -t TCP_STREAM -H 192.168.10.1 -l 60
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    60.02     935.72
      
      Average:        CPU     %user     %nice   %system   %iowait    %steal     %idle
      Average:        all      0.14      0.00      7.66      0.01      0.00     92.19
      
      In addition to above, I tested these configurations:
      - vlan (.1Q) on vlan (.1ad) on bonding on igb (I350)
      - vlan (.1Q) on vlan (.1Q) on igb (I350)
      - vlan (.1Q) on vlan (.1Q) on team on igb (I350)
      and didn't find any problems.
      
      [1] https://netdev01.org/sessions/18
          https://netdev01.org/docs/netdev01_bof_8021ad_makita_150212.pdf
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
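
      For reference, a minimal sketch (the ops structure name is illustrative;
      real drivers add the callback to their existing net_device_ops) of how a
      device such as bonding, team or igb can opt out of the core multiple-vlan
      check by supplying ndo_features_check, using the helper introduced later
      in this series.

      /* Sketch only: trust the device/slaves to segment multiply-tagged skbs. */
      #include <linux/netdevice.h>

      static const struct net_device_ops example_netdev_ops = {
              /* existing callbacks elided for brevity */
              .ndo_features_check = passthru_features_check,
      };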
    • igb: Enable TSO for stacked vlan · 1abbc98a
      Committed by Toshiaki Makita
      As the datasheets for igb (I210, I350, 82576, etc.) say, maclen can range
      from 14 to 127, which is enough for a reasonable number of vlan tags.
      My netperf tests showed that the I350's TSO works fine with multiple vlans.
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
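
      A quick back-of-the-envelope check of that claim (ordinary userspace C,
      not driver code): with a 14-byte Ethernet header and 4 bytes per tag, a
      127-byte maclen leaves room for 28 stacked tags.

      /* Stand-alone arithmetic check, not igb code. */
      #include <stdio.h>

      #define ETH_HLEN    14   /* untagged Ethernet header */
      #define VLAN_HLEN    4   /* bytes added per 802.1Q/802.1ad tag */
      #define MAX_MACLEN 127   /* upper bound quoted from the igb datasheets */

      int main(void)
      {
              printf("max stacked vlan tags: %d\n",
                     (MAX_MACLEN - ETH_HLEN) / VLAN_HLEN);
              return 0;
      }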
    • team: Don't segment multiple tagged packets on team device · b9f4cf75
      Committed by Toshiaki Makita
      Team devices don't need to segment multiple tagged packets since their
      slaves can segment them.
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: Don't segment multiple tagged packets on bonding device · 4847f049
      Committed by Toshiaki Makita
      Bonding devices don't need to segment multiple tagged packets since their
      slaves can segment them.
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Introduce passthru_features_check · e38f3025
      Committed by Toshiaki Makita
      As there are a number of (especially virtual) devices that don't
      need the multiple vlan check, introduce passthru_features_check() for
      convenience.
      Signed-off-by: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
      Signed-off-by: David S. Miller <davem@davemloft.net>
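
      As the name suggests, the helper simply passes the requested features
      through unchanged; a sketch of its shape (the actual kernel
      implementation may differ in details):

      /* Sketch of a pass-through features check: trust the device and leave
       * the feature set untouched. */
      #include <linux/netdevice.h>

      netdev_features_t passthru_features_check(struct sk_buff *skb,
                                                struct net_device *dev,
                                                netdev_features_t features)
      {
              return features;
      }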