1. 14 Dec, 2013: 7 commits
  2. 12 Dec, 2013: 1 commit
  3. 29 Nov, 2013: 1 commit
  4. 16 Nov, 2013: 1 commit
  5. 08 Nov, 2013: 1 commit
  6. 20 Oct, 2013: 4 commits
  7. 18 Oct, 2013: 1 commit
  8. 04 Oct, 2013: 1 commit
    • bonding: modify the old and add new xmit hash policies · 32819dc1
      Committed by Nikolay Aleksandrov
      This patch adds two new hash policy modes which use skb_flow_dissect:
      3 - Encapsulated layer 2+3
      4 - Encapsulated layer 3+4
      There should be a good improvement for tunnel users in those modes.
      It also changes the old hash functions to:
      hash ^= (__force u32)flow.dst ^ (__force u32)flow.src;
      hash ^= (hash >> 16);
      hash ^= (hash >> 8);
      
      Here hash is initialized either to the L2 hash, i.e. SRCMAC[5] XOR
      DSTMAC[5], or to flow->ports extracted from the upper layer. The flow's
      dst and src are also extracted based on the xmit policy, either directly
      from the buffer or by using skb_flow_dissect, but in both cases, if the
      protocol is IPv6, dst and src are obtained by ipv6_addr_hash() on the
      real addresses. For a non-dissectable packet, the algorithms fall back
      to L2 hashing.
      The bond_set_mode_ops() function is now obsolete and thus deleted
      because it was used only to set the proper hash policy. We also trim a
      pointer from struct bonding because we no longer need to keep the hash
      function; there is now a single hash function, bond_xmit_hash, which
      works based on bond->params.xmit_policy.
      
      The hash function and skb_flow_dissect were suggested by Eric Dumazet.
      The layer names were suggested by Andy Gospodarek, because I suck at
      semantics.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
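
      A minimal user-space C sketch of the hash fold described above, for
      illustration only: the function and variable names are made up, and the
      src/dst/init values are assumed to have been extracted already (in the
      driver this happens either directly from the buffer or via
      skb_flow_dissect):

      #include <stdint.h>
      #include <stdio.h>

      /* Fold the L3 addresses into an initial value (the L2 hash or the
       * packed port pair), then mix the high bits down, as described in the
       * commit message above. */
      static uint32_t bond_fold_hash(uint32_t init, uint32_t src, uint32_t dst)
      {
              uint32_t hash = init;

              hash ^= dst ^ src;
              hash ^= hash >> 16;
              hash ^= hash >> 8;

              return hash;    /* the caller typically takes hash % slave count */
      }

      int main(void)
      {
              /* example: two IPv4 addresses and a packed source/dest port pair */
              printf("%u\n", bond_fold_hash((8080u << 16) | 80u,
                                            0xc0a80001u, 0xc0a80002u));
              return 0;
      }
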
  9. 01 Oct, 2013: 1 commit
  10. 29 Sep, 2013: 1 commit
  11. 27 Sep, 2013: 12 commits
  12. 16 Sep, 2013: 1 commit
  13. 12 Sep, 2013: 1 commit
  14. 04 Sep, 2013: 1 commit
    • bonding: remove bond_vlan_used() · 6f477d42
      Committed by Veaceslav Falico
      It is currently used to verify that we have vlans before getting the tag
      from the skb we're about to send. This is useless because vlan_get_tag()
      itself verifies whether the skb has a tag (and returns an error if not),
      and we can receive tagged skbs only if we *already* have vlans.
      
      Plus, the current RCU'd implementation is kind of useless anyway: the
      last vlan can be removed the moment we return from the function.
      
      So remove the only usage of it and the whole function.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
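
      For illustration, a sketch (not the driver's actual code, and with a
      made-up helper name) of why the guard is redundant: vlan_get_tag()
      already reports through its return value whether the skb carries a tag,
      so the caller can rely on that alone:

      #include <linux/if_vlan.h>

      /* Illustrative helper: return the skb's VLAN id, or 0 for untagged
       * frames, relying solely on vlan_get_tag()'s return value. */
      static u16 sketch_skb_vlan_id(const struct sk_buff *skb)
      {
              u16 vlan_id;

              /* before: if (bond_vlan_used(bond) && !vlan_get_tag(skb, &vlan_id)) */
              if (vlan_get_tag(skb, &vlan_id) == 0)
                      return vlan_id;         /* tagged frame */

              return 0;                       /* untagged frame */
      }
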
  15. 30 Aug, 2013: 2 commits
  16. 02 Aug, 2013: 3 commits
    • bonding: initial RCU conversion · 278b2083
      Committed by nikolay@redhat.com
      This patch does the initial bonding conversion to RCU. After it the
      following modes are protected by RCU alone: roundrobin, active-backup,
      broadcast and xor. Modes ALB/TLB and 3ad still acquire bond->lock for
      reading, and will be dealt with later. curr_active_slave needs to be
      dereferenced via rcu in the converted modes because the only thing
      protecting the slave after this patch is rcu_read_lock, so we need the
      proper barrier for weakly ordered archs and to make sure we don't have a
      stale pointer. It's not tagged with __rcu yet because there's still work
      to be done to remove the curr_slave_lock, so sparse will complain when
      rcu_assign_pointer and rcu_dereference are used, but the alternative of
      using rcu_dereference_protected would have created much bigger code
      churn, which is more difficult to test and review. That will be
      converted in time.
      
      1. Active-backup mode
       1.1 Perf recording while doing iperf -P 4
        - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU
                       in bonding
        - new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU
                       in bonding
       1.2. Bandwidth measurements
        - old bonding: 16.1 gbps consistently
        - new bonding: 17.5 gbps consistently
      
      2. Round-robin mode
       2.1 Perf recording while doing iperf -P 4
        - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU
                       in bonding
        - new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU
                       in bonding
       2.2 Bandwidth measurements
        - old bonding: 8 gbps (variable due to packet reorderings)
        - new bonding: 10 gbps (variable due to packet reorderings)
      
      Of course the latency has improved in all converted modes, and moreover
      while doing enslave/release (since it doesn't affect tx anymore).
      
      Also I've stress tested all modes doing enslave/release in a loop while
      transmitting traffic.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
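
      As a sketch of the pattern (illustrative only, not the driver's exact
      code), a converted tx path dereferences the active slave under
      rcu_read_lock(), so the slave cannot be freed underneath us while we
      transmit on it:

      #include <linux/rcupdate.h>
      #include <linux/skbuff.h>

      static void sketch_xmit_activebackup(struct bonding *bond,
                                           struct sk_buff *skb)
      {
              struct slave *slave;

              rcu_read_lock();
              slave = rcu_dereference(bond->curr_active_slave);
              if (slave)
                      bond_dev_queue_xmit(bond, skb, slave->dev);
              else
                      kfree_skb(skb);         /* no active slave, drop */
              rcu_read_unlock();
      }
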
    • bonding: factor out slave id tx code and simplify xmit paths · 15077228
      Committed by Nikolay Aleksandrov
      I factored out the tx xmit code which relies on slave id into
      bond_xmit_slave_id. It is global because later it can also be used in
      3ad mode xmit. Unnecessary, obvious comments are removed. Active-backup
      mode is simplified because bond_dev_queue_xmit always consumes the skb.
      bond_xmit_xor becomes one line thanks to bond_xmit_slave_id.
      bond_for_each_slave_from is not used in bond_xmit_slave_id because
      later, when RCU is used, we can avoid an important race condition by
      using standard rculist routines.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
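
      For illustration only, a sketch of the factored-out idea; the field and
      helper names are assumptions rather than the driver's exact ones: skip
      the first slave_id slaves and transmit on the first one that is up,
      dropping the skb if no slave qualifies:

      static void sketch_xmit_slave_id(struct bonding *bond,
                                       struct sk_buff *skb, int slave_id)
      {
              struct slave *slave;
              int i = 0;

              list_for_each_entry(slave, &bond->slave_list, list) {
                      if (i++ < slave_id)
                              continue;
                      if (slave->link == BOND_LINK_UP) {
                              bond_dev_queue_xmit(bond, skb, slave->dev);
                              return;
                      }
              }
              kfree_skb(skb);                 /* no usable slave found */
      }
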
    • bonding: convert to list API and replace bond's custom list · dec1e90e
      Committed by nikolay@redhat.com
      This patch aims to remove struct bonding's first_slave and struct
      slave's next and prev pointers, and replace them with the standard Linux
      list API. The old macros are converted to the list API as well, and
      some new primitives are now available. The checks for the presence of
      slaves, which used slave_cnt, have been replaced by the list_empty
      macro.
      There are also a few small style fixes: local variable declarations are
      reordered from longest to shortest line, an empty line is left before
      return, and unnecessary brackets are removed.
      This is the first step to gradual RCU conversion.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
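
      An illustrative sketch of the shape of the conversion (the field names
      mirror what such a change introduces, and the helper is made up):

      #include <linux/list.h>
      #include <linux/types.h>

      struct bonding {
              struct list_head slave_list;    /* replaces first_slave */
              /* ... */
      };

      struct slave {
              struct list_head list;          /* replaces next/prev */
              /* ... */
      };

      /* "do we have any slaves?" no longer needs slave_cnt */
      static bool sketch_bond_has_slaves(struct bonding *bond)
      {
              return !list_empty(&bond->slave_list);
      }

      /* iteration becomes: list_for_each_entry(slave, &bond->slave_list, list) */
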
  17. 28 Jun, 2013: 1 commit
    • bonding: remove unnecessary dev_addr_from_first member · 97a1e639
      Committed by nikolay@redhat.com
      In struct bonding there's a member called dev_addr_from_first which is
      used to denote when the bond dev should clone the first slave's MAC
      address, but since we have netdev's addr_assign_type field, it is not
      necessary. We clone the first slave's MAC each time the bond device has
      a random MAC set. This has the nice side effect of also fixing an
      inconsistency: when the MAC address of the bond dev is set after its
      creation, but prior to having slaves, it's not kept and the first
      slave's MAC is cloned. The only way to keep the MAC was to create the
      bond device with the MAC address already set (e.g. through ip link). In
      all cases, if the bond device is left without any slaves, its MAC gets
      reset to a random one as before.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
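
      A sketch of the idea (illustrative; the helper name is made up, and it
      follows the 2013-era API in which dev_addr was directly writable): the
      generic addr_assign_type field already records whether the bond's MAC is
      random, so no separate flag is needed to decide when to clone the first
      slave's address:

      #include <linux/netdevice.h>
      #include <linux/etherdevice.h>

      static void sketch_clone_first_slave_mac(struct net_device *bond_dev,
                                               struct net_device *slave_dev)
      {
              /* only overwrite a randomly generated MAC; a MAC set by the
               * administrator (NET_ADDR_SET) is kept */
              if (bond_dev->addr_assign_type == NET_ADDR_RANDOM)
                      memcpy(bond_dev->dev_addr, slave_dev->dev_addr, ETH_ALEN);
      }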