1. 11 May, 2015 (1 commit)
  2. 27 Apr, 2015 (1 commit)
  3. 10 Feb, 2015 (1 commit)
  4. 05 Feb, 2015 (2 commits)
  5. 28 Jan, 2015 (1 commit)
  6. 11 Nov, 2014 (1 commit)
  7. 01 Nov, 2014 (1 commit)
  8. 07 Oct, 2014 (2 commits)
  9. 30 Sep, 2014 (1 commit)
    • bonding: make global bonding stats more reliable · 5f0c5f73
      Committed by Andy Gospodarek
      As the code stands today, bonding stats are based simply on the stats
      from the member interfaces.  If a member were removed from a bond,
      the stats would instantly drop.  This would be confusing to an admin
      who would suddenly see interface stats drop while traffic is still
      flowing.
      
      In addition to preventing the stats drops mentioned above, only
      traffic received after a new member joins the bond will be counted
      as part of the bonding stats.  Bonding counters will also be updated
      when any slaves are dropped, to make sure the reported stats are
      reliable.
      
      v2: Changes suggested by Nik to properly allocate/free stats memory.
      v3: Properly destroy workqueue and fix netlink configuration path.
      v4: Moved cached stats into bonding and slave structs as there does not
      seem to be a complexity/performance benefit to using alloc'd memory vs
      in-struct memory.
      Signed-off-by: Andy Gospodarek <gospo@cumulusnetworks.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
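      
      A minimal sketch of the delta-accounting scheme the v4 note
      describes (struct and function names here are illustrative, not
      the exact kernel symbols): each slave keeps a snapshot of its
      stats as of the last read, and the bond folds only the growth
      since that snapshot into its own cached totals, so removing a
      slave no longer erases history.
      
          #include <stddef.h>
          #include <stdint.h>
          
          /* Illustrative stand-in for the kernel's rtnl_link_stats64. */
          struct link_stats {
                  uint64_t rx_packets, tx_packets;
                  uint64_t rx_bytes, tx_bytes;
          };
          
          struct slave {
                  struct link_stats snapshot;   /* stats at the last bond read */
          };
          
          struct bond {
                  struct link_stats accumulated; /* survives slave removal */
          };
          
          /* Fold the growth since the last snapshot into the bond totals,
           * counter by counter, then refresh the snapshot.  A slave that
           * joins later starts from a snapshot taken at join time, so
           * traffic it carried before joining is never counted.
           */
          static void bond_fold_slave_stats(struct bond *bond,
                                            struct slave *slave,
                                            const struct link_stats *now)
          {
                  uint64_t *acc = (uint64_t *)&bond->accumulated;
                  uint64_t *snap = (uint64_t *)&slave->snapshot;
                  const uint64_t *cur = (const uint64_t *)now;
                  size_t i, n = sizeof(*now) / sizeof(uint64_t);
          
                  for (i = 0; i < n; i++) {
                          acc[i] += cur[i] - snap[i];  /* growth since last read */
                          snap[i] = cur[i];            /* refresh the snapshot */
                  }
          }
      
      Running the same fold one last time when a slave is released is
      what keeps the reported totals from dipping.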
  10. 16 Sep, 2014 (1 commit)
  11. 14 Sep, 2014 (3 commits)
  12. 10 Sep, 2014 (2 commits)
  13. 21 Jul, 2014 (1 commit)
  14. 17 Jul, 2014 (1 commit)
    • bonding: Do not try to send packets over dead link in TLB mode. · 6b794c1c
      Committed by Mahesh Bandewar
      In TLB mode, if tlb_dynamic_lb is NOT set, slaves from the bond
      group are selected based on the hash distribution. This does not
      exclude dead links that are part of the bond. Also, if a
      temporary link event brings down an interface, packets hashed
      onto that interface would be dropped too.
      
      This patch fixes these issues and distributes flows across the
      UP links only. The array of links capable of sending packets is
      built in the control path, leaving only link selection in the
      data path.
      
      One possible side effect of this is that at a link event all
      flows will be shuffled to get a good distribution. But the impact
      of this should be minimal, on the assumption that a member or
      members of the bond group being unavailable is a very temporary
      situation.
      Signed-off-by: Mahesh Bandewar <maheshb@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
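      
      A hedged sketch of the split described above (illustrative names,
      not the driver's actual symbols): the control path rebuilds a
      dense array of usable slaves on any link event, and the transmit
      path only indexes into that array.
      
          #include <stddef.h>
          #include <stdbool.h>
          
          #define MAX_SLAVES 32
          
          struct tlb_slave {
                  bool link_up;
          };
          
          struct tlb_bond {
                  struct tlb_slave *slaves[MAX_SLAVES];    /* all enslaved devices */
                  size_t n_slaves;
                  struct tlb_slave *up_slaves[MAX_SLAVES]; /* rebuilt on link events */
                  size_t n_up;
          };
          
          /* Control path: collect only the slaves that can transmit. */
          static void tlb_rebuild_up_array(struct tlb_bond *bond)
          {
                  size_t i;
          
                  bond->n_up = 0;
                  for (i = 0; i < bond->n_slaves; i++)
                          if (bond->slaves[i]->link_up)
                                  bond->up_slaves[bond->n_up++] = bond->slaves[i];
          }
          
          /* Data path: pure indexing, no link-state scanning per packet. */
          static struct tlb_slave *tlb_pick_slave(const struct tlb_bond *bond,
                                                  unsigned int hash)
          {
                  return bond->n_up ? bond->up_slaves[hash % bond->n_up] : NULL;
          }
      
      Because the modulo base changes whenever the array is rebuilt,
      flows reshuffle on a link event, which is exactly the side effect
      the message calls out.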
  15. 16 Jul, 2014 (3 commits)
  16. 05 Jun, 2014 (1 commit)
    • bonding: Support macvlans on top of tlb/rlb mode bonds · 14af9963
      Committed by Vlad Yasevich
      To make TLB mode work, the patch allows learning packets to be
      sent using MAC addresses assigned to macvlan devices, also
      taking into account VLANs that may sit between the bond and the
      macvlan device.
      
      To make RLB work, all we have to do is accept ARP packets for
      addresses added to the bond dev->uc list.  Since RLB mode takes
      care to update the peers directly with the correct MAC
      addresses, learning packets for these addresses do not have to
      be sent to the switch.
      Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
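      
      A rough sketch of the RLB half of the change (simplified types,
      not the kernel's): before refusing an incoming ARP, also match
      its target hardware address against the unicast addresses that
      macvlans have registered on the bond device.
      
          #include <stdbool.h>
          #include <string.h>
          
          #define ETH_ALEN 6
          
          struct uc_entry {
                  unsigned char addr[ETH_ALEN];
                  struct uc_entry *next;
          };
          
          struct bond_dev {
                  unsigned char dev_addr[ETH_ALEN]; /* bond's own MAC */
                  struct uc_entry *uc_list;         /* macvlan MACs land here */
          };
          
          /* Accept ARP addressed to the bond itself or to any unicast
           * address (e.g. a macvlan) registered on the bond device.
           */
          static bool rlb_arp_addr_ours(const struct bond_dev *bond,
                                        const unsigned char *tha)
          {
                  const struct uc_entry *e;
          
                  if (!memcmp(bond->dev_addr, tha, ETH_ALEN))
                          return true;
                  for (e = bond->uc_list; e; e = e->next)
                          if (!memcmp(e->addr, tha, ETH_ALEN))
                                  return true;
                  return false;
          }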
  17. 23 May, 2014 (1 commit)
  18. 17 May, 2014 (10 commits)
  19. 15 May, 2014 (1 commit)
  20. 09 May, 2014 (3 commits)
  21. 25 Apr, 2014 (2 commits)
    • bonding: Add tlb_dynamic_lb parameter for tlb mode · e9f0fb88
      Committed by Mahesh Bandewar
      The aggressive load balancing causes packet re-ordering as active
      flows are moved from one slave to another within the group.
      Sometimes this aggressive lb is not necessary if the preference
      is for less re-ordering. Setting this parameter to "0" disables
      the dynamic flow shuffling, minimizing packet re-ordering. Of
      course the side effect is that it has to live with the static
      load balancing that the hashing distribution provides. This
      impact is less severe if the correct xmit-hashing-policy is used
      for the tlb setup.
      
      The default value of the parameter is set to "1", mimicking the
      earlier behavior.
      
      Ran the netperf test with 200 streams for 1 min between two
      hosts with a 4x1G trunk (xmit-lb mode with xmit-policy L3+4)
      before and after these changes. The following command was used
      for each of the 200 instances:
      
          netperf -t TCP_RR -l 60 -s 5 -H <host> -- -r81920,81920
      
      Transactions per second:
          Before change: 1,367.11
          After  change: 1,470.65
      
      Change-Id: Ie3f75c77282cf602e83a6e833c6eb164e72a0990
      Signed-off-by: Mahesh Bandewar <maheshb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
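      
      A condensed sketch of the choice the parameter introduces (the
      slave structures and load metric are simplified, not the
      driver's real rebalancing logic): with tlb_dynamic_lb set to
      "0", TLB keeps every flow pinned to the slave its hash selects
      instead of migrating flows toward the least-loaded link.
      
          #include <stddef.h>
          #include <stdint.h>
          
          struct tlb_slave {
                  uint64_t tx_bytes;  /* rough load estimate */
          };
          
          /* Static choice: the flow's hash pins it to one slave. */
          static struct tlb_slave *pick_static(struct tlb_slave *slaves,
                                               size_t n, unsigned int hash)
          {
                  return &slaves[hash % n];
          }
          
          /* Dynamic choice: chase the least-loaded slave, which balances
           * better but can reorder packets within a flow.
           */
          static struct tlb_slave *pick_dynamic(struct tlb_slave *slaves,
                                                size_t n)
          {
                  struct tlb_slave *best = &slaves[0];
                  size_t i;
          
                  for (i = 1; i < n; i++)
                          if (slaves[i].tx_bytes < best->tx_bytes)
                                  best = &slaves[i];
                  return best;
          }
          
          struct tlb_slave *tlb_xmit_slave(struct tlb_slave *slaves, size_t n,
                                           int tlb_dynamic_lb,
                                           unsigned int hash)
          {
                  return tlb_dynamic_lb ? pick_dynamic(slaves, n)
                                        : pick_static(slaves, n, hash);
          }
      
      The static path is where the choice of xmit-hashing-policy
      matters: the hash alone decides the distribution, so a policy
      matched to the traffic (e.g. L3+4) spreads flows more evenly.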
    • bonding: Changed hashing function to just provide hash · ee62e868
      Committed by Mahesh Bandewar
      Modified the hash function to return just the hash, separating
      out the modulo operation, which can be performed by the caller.
      This makes way for the tlb mode to use the same hashing policies
      that are used in the 802.3ad and Xor modes.
      
      Change-Id: I276609e87e0ca213c4d1b17b79c5e0b0f3d0dd6f
      Signed-off-by: Mahesh Bandewar <maheshb@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
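      
      A simplified before/after sketch of the refactor (the flow tuple
      and mixing here are illustrative, not the real xmit-hash
      policies): the hash helper now returns the raw hash, and each
      caller applies its own modulo over its slave count, which is
      what lets tlb reuse the same policies as 802.3ad and Xor.
      
          #include <stddef.h>
          #include <stdint.h>
          
          struct flow_keys {  /* simplified L3+4 tuple */
                  uint32_t src_ip, dst_ip;
                  uint16_t src_port, dst_port;
          };
          
          /* After the refactor: return the raw hash, no modulo. */
          static uint32_t xmit_hash(const struct flow_keys *f)
          {
                  uint32_t h = f->src_ip ^ f->dst_ip;
          
                  h ^= (uint32_t)f->src_port << 16 | f->dst_port;
                  h ^= h >> 16;  /* cheap mixing, illustration only */
                  return h;
          }
          
          /* Each caller (Xor, 802.3ad, and now tlb) maps the hash onto
           * its own slave count.
           */
          static size_t pick_index(const struct flow_keys *f, size_t slave_cnt)
          {
                  return xmit_hash(f) % slave_cnt;
          }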