1. 09 Oct 2013, 1 commit
  2. 27 Sep 2013, 5 commits
    • bonding: add bond_has_slaves() and use it · 0965a1f3
      Authored by Veaceslav Falico
      Currently we verify that we have slaves by checking whether bond->slave_list
      is empty. Add a bond_has_slaves() define and use it; it is a bit more
      readable and easier to change in the future.
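      A minimal sketch of what such a helper could look like (hypothetical; the
      exact define in the patch may differ):

          /* true when the bond has at least one enslaved device */
          #define bond_has_slaves(bond) (!list_empty(&(bond)->slave_list))

          if (!bond_has_slaves(bond))
              return;             /* nothing to do without slaves */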
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: rework rlb_next_rx_slave() to use bond_for_each_slave() · 6475ae4c
      Authored by Veaceslav Falico
      Currently, we're using bond_for_each_slave_from(), which is really hard to
      implement under RCU and/or the neighbour list.
      
      Remove it and use bond_for_each_slave() instead, taking care of the last
      used slave.
      
      Also, rename next_rx_slave to rx_slave and store the current (last)
      rx_slave.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: make bond_for_each_slave() use lower neighbour's private · 9caff1e7
      Authored by Veaceslav Falico
      bond_for_each_slave() now needs a list_head *iter, so add one wherever it is
      needed. Use both the non-RCU and RCU variants.
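      A caller-side sketch of the new shape (hypothetical usage; the iterator
      cursor is now an explicit list_head supplied by the caller):

          struct list_head *iter;
          struct slave *slave;

          bond_for_each_slave(bond, slave, iter)
              pr_info("%s: slave %s\n", bond->dev->name, slave->dev->name);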
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: Dimitris Michailidis <dm@chelsio.com>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: remove bond_for_each_slave_continue_reverse() · 81f23b13
      Authored by Veaceslav Falico
      We only use it in rollback scenarios and can easily use the standard
      bond_for_each_dev() instead.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: add adj_list to save only neighbours · 2f268f12
      Authored by Veaceslav Falico
      Currently, we distinguish neighbours (first-level linked devices) from
      non-neighbours by the neighbour bool in netdev_adjacent. This can be quite
      time-consuming when we want to traverse *only* the neighbours, because we
      have to walk all devices and check this flag. In the (quite common) scenario
      of lots of vlans on top of a bridge, which is on top of a bond, the bonding
      device would have to go through all those vlans just to reach its upper
      neighbour linked devices.
      
      This situation is really unpleasant, because there are already a lot of
      cases where a device with slaves needs to go through them in the hot path.
      
      To fix this, introduce a new upper/lower device list structure - adj_list -
      which contains only the neighbours. It always works in pair with the
      all_adj_list structure (renamed from upper/lower_dev_list), i.e. both of
      them contain the same links, except that all_adj_list also contains the
      non-neighbour device links. It is a small change, currently visible only to
      __netdev_adjacent_dev_insert/remove(), and it doesn't change the main
      linking logic at all.
      
      Also, add some comments, fix a name collision in
      netdev_for_each_upper_dev_rcu(), and rework the naming according to the
      following rules:

      netdev_(all_)(upper|lower)_*

      If "all_" is present, we work with the whole list of upper/lower devices;
      otherwise, only with the direct neighbours. Uninline the functions to get
      better stack traces.
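      A rough sketch of the idea (field and type names here are assumptions, not
      the exact layout from the patch):

          struct netdev_adjacent_lists {
              struct list_head upper;
              struct list_head lower;
          };

          struct net_device {
              /* ... other fields ... */
              struct netdev_adjacent_lists adj_list;      /* direct neighbours only  */
              struct netdev_adjacent_lists all_adj_list;  /* all upper/lower devices */
              /* ... other fields ... */
          };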
      
      CC: "David S. Miller" <davem@davemloft.net>
      CC: Eric Dumazet <edumazet@google.com>
      CC: Jiri Pirko <jiri@resnulli.us>
      CC: Alexander Duyck <alexander.h.duyck@intel.com>
      CC: Cong Wang <amwang@redhat.com>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 16 Sep 2013, 1 commit
  4. 04 Sep 2013, 2 commits
  5. 30 Aug 2013, 3 commits
    • bonding: remove vlan_list/current_alb_vlan · e868b0c9
      Authored by Veaceslav Falico
      Currently there are no real users of vlan_list/current_alb_vlan, only the
      helpers which maintain them, so remove them.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: make alb_send_learning_packets() use upper dev list · 5bf94b83
      Authored by Veaceslav Falico
      Currently, if there are vlans on top of a bond, alb_send_learning_packets()
      will never send LPs from the bond itself (i.e. untagged), which might leave
      untagged clients without updates.
      
      Also, the 'circular vlan' logic (i.e. update only MAX_LP_BURST vlans at a
      time and save the last vlan for the next update) is really suboptimal - with
      lots of vlans it takes a long time to update every vlan. It is also never
      called in any hot path and sends only a few small packets, so the
      optimization by itself is useless.
      
      So remove the whole current_alb_vlan/MAX_LP_BURST logic from
      alb_send_learning_packets(). Instead, we'll first send a packet untagged
      and then traverse the upper dev list, sending a tagged packet for each vlan
      found. Also, remove the MAX_LP_BURST define - we already don't need it.
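      A hedged sketch of the resulting loop; alb_send_lp_vid() is the helper from
      the split patch listed below, and its exact signature here is an assumption:

          struct net_device *upper;
          struct list_head *iter;

          /* untagged learning packet from the bond itself */
          alb_send_lp_vid(slave, mac_addr, 0);

          /* one tagged learning packet per vlan found above the bond */
          rcu_read_lock();
          netdev_for_each_upper_dev_rcu(bond->dev, upper, iter) {
              if (is_vlan_dev(upper))
                  alb_send_lp_vid(slave, mac_addr, vlan_dev_vlan_id(upper));
          }
          rcu_read_unlock();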
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: split alb_send_learning_packets() · 7aa64981
      Authored by Veaceslav Falico
      Create alb_send_lp_vid(), which will handle the skb/lp creation, vlan
      tagging and sending, and use it in alb_send_learning_packets().
      
      This way all the logic remains in alb_send_learning_packets(), which
      becomes a lot cleaner and easier to understand.
      
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 02 Aug 2013, 2 commits
    • bonding: initial RCU conversion · 278b2083
      Authored by nikolay@redhat.com
      This patch does the initial bonding conversion to RCU. After it the
      following modes are protected by RCU alone: roundrobin, active-backup,
      broadcast and xor. Modes ALB/TLB and 3ad still acquire bond->lock for
      reading, and will be dealt with later. curr_active_slave needs to be
      dereferenced via rcu in the converted modes because the only thing
      protecting the slave after this patch is rcu_read_lock, so we need the
      proper barrier for weakly ordered archs and to make sure we don't have a
      stale pointer. It's not tagged with __rcu yet because there's still work
      to be done to remove the curr_slave_lock, so sparse will complain when
      rcu_assign_pointer and rcu_dereference are used, but the alternative to use
      rcu_dereference_protected would've created much bigger code churn which is
      more difficult to test and review. That will be converted in time.
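      For illustration, the dereference pattern in a converted tx path looks
      roughly like this (a sketch, not the literal patch):

          rcu_read_lock();
          slave = rcu_dereference(bond->curr_active_slave);
          if (slave)
              bond_dev_queue_xmit(bond, skb, slave->dev);
          else
              kfree_skb(skb);
          rcu_read_unlock();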
      
      1. Active-backup mode
       1.1 Perf recording while doing iperf -P 4
        - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU
                       in bonding
        - new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU
                       in bonding
       1.2. Bandwidth measurements
        - old bonding: 16.1 gbps consistently
        - new bonding: 17.5 gbps consistently
      
      2. Round-robin mode
       2.1 Perf recording while doing iperf -P 4
        - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU
                       in bonding
        - new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU
                       in bonding
       2.2 Bandwidth measurements
        - old bonding: 8 gbps (variable due to packet reorderings)
        - new bonding: 10 gbps (variable due to packet reorderings)
      
      Of course the latency has improved in all converted modes, and moreover
      while doing enslave/release (since it doesn't affect tx anymore).
      
      Also I've stress tested all modes doing enslave/release in a loop while
      transmitting traffic.
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: convert to list API and replace bond's custom list · dec1e90e
      Authored by nikolay@redhat.com
      This patch aims to remove struct bonding's first_slave and struct
      slave's next and prev pointers, and replace them with the standard Linux
      list API. The old macros are converted to list API as well and some new
      primitives are available now. The checks for the presence of slaves, which
      used slave_cnt, have been replaced by the list_empty macro.
      Also a few small style fixes: reordering local variable declarations from
      longest to shortest line, leaving an empty line before return and removing
      unnecessary brackets.
      This is the first step to gradual RCU conversion.
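      A hedged sketch of the kind of primitives this provides (the exact macro
      bodies are assumptions):

          /* the slave list is now a standard list_head inside struct bonding */
          #define bond_for_each_slave(bond, pos) \
                  list_for_each_entry(pos, &(bond)->slave_list, list)

          /* "do we have slaves?" becomes a plain emptiness check */
          if (list_empty(&bond->slave_list))
              goto out;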
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 20 Jun 2013, 1 commit
  8. 18 Jun 2013, 1 commit
  9. 29 May 2013, 1 commit
  10. 20 Apr 2013, 1 commit
  11. 05 Jan 2013, 1 commit
  12. 01 Dec 2012, 2 commits
    • bonding: delete migrated IP addresses from the rlb hash table · e53665c6
      Authored by Jiri Bohac
      Bonding in balance-alb mode records information from ARP packets
      passing through the bond in a hash table (rx_hashtbl).
      
      At certain situations (e.g. link change of a slave),
      rlb_update_rx_clients() will send out ARP packets to update ARP
      caches of other hosts on the network to achieve RX load
      balancing.
      
      The problem is that once an IP address is recorded in the hash
      table, it stays there indefinitely. If this IP address is
      migrated to a different host in the network, bonding still sends
      out ARP packets that poison other systems' ARP caches with
      invalid information.
      
      This patch solves this by looking at all incoming ARP packets,
      and checking if the source IP address is one of the source
      addresses stored in the rx_hashtbl. If it is, but the MAC
      addresses differ, the corresponding hash table entries are
      removed. Thus, when an IP address is migrated, the first ARP
      broadcast by its new owner will purge the offending entries of
      rx_hashtbl.
      
      The hash table is hashed by ip_dst. To be able to do the above
      check efficiently (not walking the whole hash table), we need a
      reverse mapping (by ip_src).
      
      I added three new members in struct rlb_client_info:
         rx_hashtbl[x].src_first will point to the start of a list of
            entries for which hash(ip_src) == x.
         The list is linked with src_next and src_prev.
      
      When an incoming ARP packet arrives at rlb_arp_recv()
      rlb_purge_src_ip() can quickly walk only the entries on the
      corresponding lists, i.e. the entries that are likely to contain
      the offending IP address.
      
      To avoid confusion, I renamed these existing fields of struct
      rlb_client_info:
      	next -> used_next
      	prev -> used_prev
      	rx_hashtbl_head -> rx_hashtbl_used_head
      
      (The current linked list is _not_ a list of hash table
      entries with colliding ip_dst. It's a list of entries that are
      being used; its purpose is to avoid walking the whole hash table
      when looking for used entries.)
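      A sketch of the resulting structure (abbreviated; linking entries by table
      index rather than by pointer is an assumption here):

          struct rlb_client_info {
              u8     mac_dst[ETH_ALEN];
              __be32 ip_src;
              __be32 ip_dst;
              /* ... other fields ... */
              u32    used_next;   /* next entry on the "in use" list      */
              u32    used_prev;   /* previous entry on the "in use" list  */
              u32    src_first;   /* head of the list of entries whose
                                   * ip_src hashes to this slot           */
              u32    src_prev;    /* links within that ip_src list        */
              u32    src_next;
          };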
      Signed-off-by: Jiri Bohac <jbohac@suse.cz>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bonding: rlb mode of bond should not alter ARP originating via bridge · 567b871e
      Authored by zheng.li
      Do not modify or load balance ARP packets passing through balance-alb
      mode (wherein the ARP did not originate locally, and arrived via a bridge).
      
      Modifying pass-through ARP replies causes an incorrect MAC address
      to be placed into the ARP packet, rendering peers unable to communicate
      with the actual destination from which the ARP reply originated.
      
      Load balancing pass-through ARP requests causes an entry to be
      created for the peer in the rlb table, and bond_alb_monitor will
      occasionally issue ARP updates to all peers in the table instructing them
      as to which MAC address they should communicate with; this occurs when
      some event sets rx_ntt.  In the bridged case, however, the MAC address
      used for the update would be the MAC of the slave, not the actual source
      MAC of the originating destination.  This would render peers unable to
      communicate with the destinations beyond the bridge.
      Signed-off-by: Zheng Li <zheng.x.li@oracle.com>
      Cc: Jay Vosburgh <fubar@us.ibm.com>
      Cc: Andy Gospodarek <andy@greyhouse.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 14 Jun 2012, 1 commit
  14. 13 Jun 2012, 1 commit
  15. 14 May 2012, 1 commit
  16. 11 May 2012, 1 commit
    • net, drivers/net: Convert compare_ether_addr_64bits to ether_addr_equal_64bits · a6700db1
      Authored by Joe Perches
      Use the new bool function ether_addr_equal_64bits to add some clarity and
      reduce the likelihood of misusing compare_ether_addr_64bits for sorting.
      
      Done via cocci script:
      
      $ cat compare_ether_addr_64bits.cocci
      @@
      expression a,b;
      @@
      -	!compare_ether_addr_64bits(a, b)
      +	ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	compare_ether_addr_64bits(a, b)
      +	!ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	!ether_addr_equal_64bits(a, b) == 0
      +	ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	!ether_addr_equal_64bits(a, b) != 0
      +	!ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	ether_addr_equal_64bits(a, b) == 0
      +	!ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	ether_addr_equal_64bits(a, b) != 0
      +	ether_addr_equal_64bits(a, b)
      
      @@
      expression a,b;
      @@
      -	!!ether_addr_equal_64bits(a, b)
      +	ether_addr_equal_64bits(a, b)
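      The net effect at a call site looks like this (hypothetical example):

          /* before: a zero return value meant "addresses are equal" */
          if (!compare_ether_addr_64bits(mac1, mac2))
              handle_match();

          /* after: a bool that reads naturally */
          if (ether_addr_equal_64bits(mac1, mac2))
              handle_match();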
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 01 Feb 2012, 1 commit
  18. 19 Jan 2012, 1 commit
    • bonding: fix enslaving in alb mode when link down · b924551b
      Authored by Jiri Bohac
      bond_alb_init_slave() is called from bond_enslave() and sets the slave's MAC
      address. This is done differently for TLB and ALB modes.
      bond->alb_info.rlb_enabled is used to discriminate between the two modes but
      this flag may be uninitialized if the slave is being enslaved prior to calling
      bond_open() -> bond_alb_initialize() on the master.
      
      It turns out all the callers of alb_set_slave_mac_addr() pass
      bond->alb_info.rlb_enabled as the hw parameter.
      
      This patch cleans up the unnecessary parameter of alb_set_slave_mac_addr() and
      makes the function decide based on the bonding mode instead, which fixes the
      above problem.
      Reported-by: Narendra K <Narendra_K@Dell.com>
      Signed-off-by: Jiri Bohac <jbohac@suse.cz>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 12 Jan 2012, 1 commit
  20. 30 Oct 2011, 1 commit
    • bonding: eliminate bond_close race conditions · e6d265e8
      Authored by Jay Vosburgh
      This patch resolves two sets of race conditions.
      
      	Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> reported the
      first, as follows:
      
      The bond_close() calls cancel_delayed_work() to cancel delayed works.
      It, however, cannot cancel works that were already queued in the workqueue.
      The bond_open() initializes work->data, and process_one_work() refers to
      get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when
      work->data has been initialized. Thus, a panic occurs.
      
      	He included a patch that converted the cancel_delayed_work calls
      in bond_close to flush_delayed_work_sync, which eliminated the above
      problem.
      
      	His patch is incorporated, at least in principle, into this
      patch.  In this patch, we use cancel_delayed_work_sync in place of
      flush_delayed_work_sync, and also convert bond_uninit in addition to
      bond_close.
      
      	This conversion to _sync, however, opens new races between
      bond_close and three periodically executing workqueue functions:
      bond_mii_monitor, bond_alb_monitor and bond_activebackup_arp_mon.
      
      	The race occurs because bond_close and bond_uninit are always
      called with RTNL held, and these workqueue functions may acquire RTNL to
      perform failover-related activities.  If bond_close or bond_uninit is
      waiting in cancel_delayed_work_sync, deadlock occurs.
      
      	These deadlocks are resolved by having the workqueue functions
      acquire RTNL conditionally.  If the rtnl_trylock() fails, the functions
      reschedule and return immediately.  For the cases that are attempting to
      perform link failover, a delay of 1 is used; for the other cases, the
      normal interval is used (as those activities are not as time critical).
      
      	Additionally, the bond_mii_monitor function now stores the delay
      in a variable (mimicking the structure of activebackup_arp_mon).
      
      	Lastly, all of the above renders the kill_timers sentinel moot,
      and therefore it has been removed.
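      A simplified sketch of the conditional-RTNL pattern described above (the
      real monitor functions carry more state; this only shows the shape):

          static void bond_mii_monitor(struct work_struct *work)
          {
              struct bonding *bond = container_of(work, struct bonding,
                                                  mii_work.work);
              unsigned long delay = msecs_to_jiffies(bond->params.miimon);

              if (bond_miimon_inspect(bond)) {
                  if (!rtnl_trylock()) {
                      delay = 1;                  /* retry very soon */
                      goto re_arm;
                  }
                  bond_miimon_commit(bond);
                  rtnl_unlock();
              }

          re_arm:
              if (bond->params.miimon)
                  queue_delayed_work(bond->wq, &bond->mii_work, delay);
          }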
      Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 04 Oct 2011, 1 commit
    • bonding: properly stop queuing work when requested · a0db2dad
      Authored by Andy Gospodarek
      During a test where a pair of bonding interfaces using ARP monitoring
      were both brought up and torn down (with an rmmod) repeatedly, a panic
      in the timer code was noticed.  I tracked this down and determined that
      any of the bonding functions that ran as workqueue handlers and requeued
      more work might not properly exit when the module was removed.
      
      There was a flag protected by the bond lock called kill_timers that is
      set when the interface goes down or the module is removed, but many of
      the functions that monitor link status now unlock the bond lock to take
      rtnl first.  There is a chance that another CPU running the rmmod could
      get the lock and set kill_timers after the first check has passed.
      
      This patch allows a function to queue work that will requeue itself only
      when kill_timers is not set.  I also noticed while doing this work that
      bond_resend_igmp_join_requests did not have a check for kill_timers, so I
      added the needed check there as well.
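      The guard itself is small; a hypothetical requeue site then looks like:

          if (!bond->kill_timers)
              queue_delayed_work(bond->wq, &bond->arp_work,
                                 msecs_to_jiffies(bond->params.arp_interval));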
      Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
      Reported-by: Liang Zheng <lzheng@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 22 Jul 2011, 1 commit
  23. 26 May 2011, 1 commit
    • bonding: prevent deadlock on slave store with alb mode (v3) · 9fe0617d
      Authored by Neil Horman
      This soft lockup was recently reported:
      
      [root@dell-per715-01 ~]# echo +bond5 > /sys/class/net/bonding_masters
      [root@dell-per715-01 ~]# echo +eth1 > /sys/class/net/bond5/bonding/slaves
      bonding: bond5: doing slave updates when interface is down.
      bonding bond5: master_dev is not up in bond_enslave
      [root@dell-per715-01 ~]# echo -eth1 > /sys/class/net/bond5/bonding/slaves
      bonding: bond5: doing slave updates when interface is down.
      
      BUG: soft lockup - CPU#12 stuck for 60s! [bash:6444]
      CPU 12:
      Modules linked in: bonding autofs4 hidp rfcomm l2cap bluetooth lockd sunrpc
      be2d
      Pid: 6444, comm: bash Not tainted 2.6.18-262.el5 #1
      RIP: 0010:[<ffffffff80064bf0>]  [<ffffffff80064bf0>]
      .text.lock.spinlock+0x26/00
      RSP: 0018:ffff810113167da8  EFLAGS: 00000286
      RAX: ffff810113167fd8 RBX: ffff810123a47800 RCX: 0000000000ff1025
      RDX: 0000000000000000 RSI: ffff810123a47800 RDI: ffff81021b57f6f8
      RBP: ffff81021b57f500 R08: 0000000000000000 R09: 000000000000000c
      R10: 00000000ffffffff R11: ffff81011d41c000 R12: ffff81021b57f000
      R13: 0000000000000000 R14: 0000000000000282 R15: 0000000000000282
      FS:  00002b3b41ef3f50(0000) GS:ffff810123b27940(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00002b3b456dd000 CR3: 000000031fc60000 CR4: 00000000000006e0
      
      Call Trace:
       [<ffffffff80064af9>] _spin_lock_bh+0x9/0x14
       [<ffffffff886937d7>] :bonding:tlb_clear_slave+0x22/0xa1
       [<ffffffff8869423c>] :bonding:bond_alb_deinit_slave+0xba/0xf0
       [<ffffffff8868dda6>] :bonding:bond_release+0x1b4/0x450
       [<ffffffff8006457b>] __down_write_nested+0x12/0x92
       [<ffffffff88696ae4>] :bonding:bonding_store_slaves+0x25c/0x2f7
       [<ffffffff801106f7>] sysfs_write_file+0xb9/0xe8
       [<ffffffff80016b87>] vfs_write+0xce/0x174
       [<ffffffff80017450>] sys_write+0x45/0x6e
       [<ffffffff8005d28d>] tracesys+0xd5/0xe0
      
      It occurs because we are able to change the slave configuration of a bond while
      the bond interface is down.  The bonding driver initializes some data structures
      only after its ndo_open routine is called.  Among them is the initialization of
      the alb tx and rx hash locks.  So if we add or remove a slave without first
      opening the bond master device, we run the risk of trying to lock/unlock a
      spinlock that has garbage for data in it, which results in the soft lockup above.

      Note that sometimes this works, because in many cases an unlocked spinlock has
      the raw_lock parameter initialized to zero (meaning that the kzalloc of the
      net_device private data is equivalent to calling spin_lock_init), but that's not
      true in all cases, and we aren't guaranteed that condition, so we need to pass
      the relevant spinlocks through the spin_lock_init function.
      
      Fix it by moving the spin_lock_init calls for the tx and rx hashtable locks to
      the ndo_init path, so they are ready for use by the bond_store_slaves path.
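      A sketch of the fix (the placement and surrounding code are assumptions):

          static int bond_init(struct net_device *bond_dev)
          {
              struct bonding *bond = netdev_priv(bond_dev);

              /* ... existing initialization ... */

              /* ready before any enslave via sysfs, even with the bond down */
              spin_lock_init(&bond->alb_info.tx_hashtbl_lock);
              spin_lock_init(&bond->alb_info.rx_hashtbl_lock);

              return 0;
          }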
      
      Change notes:
      v2) Based on conversation with Jay and Nicolas it seems that the ability to
      enslave devices while the bond master is down should be safe to do.  As such
      this is an outlier bug, and so instead we'll just initialize the errant spinlocks
      in the init path rather than the open path, solving the problem.  We'll also
      remove the warnings about the bond being down during enslave operations, since
      it should be safe.
      
      v3) Fix spelling error
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Reported-by: jtluka@redhat.com
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: nicolas.2p.debian@gmail.com
      CC: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  24. 10 May 2011, 1 commit
  25. 26 Apr 2011, 1 commit
    • bonding: move processing of recv handlers into handle_frame() · 3aba891d
      Authored by Jiri Pirko
      Now that bonding uses rx_handler, all traffic going into the bond
      device goes through bond_handle_frame. So there's no need to go back into
      the bonding code later via ptype handlers. This patch converts the
      original ptype handlers into "bonding receive probes". These functions
      are called from bond_handle_frame and they are registered per mode.
      
      Note that vlan packets are also handled because they are always untagged
      thanks to vlan_untag()
      
      Note that this also allows arpmon for eth-bond-bridge-vlan topology.
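      Schematically (a sketch; the exact hook signature is an assumption):

          /* inside bond_handle_frame(): offer a copy of every incoming frame
           * to the probe registered by the current mode */
          void (*recv_probe)(struct sk_buff *, struct bonding *,
                             struct slave *) = bond->recv_probe;

          if (recv_probe) {
              struct sk_buff *nskb = skb_clone(skb, GFP_ATOMIC);

              if (nskb) {
                  recv_probe(nskb, bond, slave);
                  dev_kfree_skb(nskb);
              }
          }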
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  26. 12 Apr 2011, 2 commits
  27. 01 Mar 2011, 1 commit
  28. 21 Jan 2011, 1 commit
    • bonding: Ensure that we unshare skbs prior to calling pskb_may_pull · b3053251
      Authored by Neil Horman
      Recently reported oops:
      
      kernel BUG at net/core/skbuff.c:813!
      invalid opcode: 0000 [#1] SMP
      last sysfs file: /sys/devices/virtual/net/bond0/broadcast
      CPU 8
      Modules linked in: sit tunnel4 cpufreq_ondemand acpi_cpufreq freq_table bonding
      ipv6 dm_mirror dm_region_hash dm_log cdc_ether usbnet mii serio_raw i2c_i801
      i2c_core iTCO_wdt iTCO_vendor_support shpchp ioatdma i7core_edac edac_core bnx2
      ixgbe dca mdio sg ext4 mbcache jbd2 sd_mod crc_t10dif mptsas mptscsih mptbase
      scsi_transport_sas dm_mod [last unloaded: microcode]
      
      Modules linked in: sit tunnel4 cpufreq_ondemand acpi_cpufreq freq_table bonding
      ipv6 dm_mirror dm_region_hash dm_log cdc_ether usbnet mii serio_raw i2c_i801
      i2c_core iTCO_wdt iTCO_vendor_support shpchp ioatdma i7core_edac edac_core bnx2
      ixgbe dca mdio sg ext4 mbcache jbd2 sd_mod crc_t10dif mptsas mptscsih mptbase
      scsi_transport_sas dm_mod [last unloaded: microcode]
      Pid: 0, comm: swapper Not tainted 2.6.32-71.el6.x86_64 #1 BladeCenter HS22
      -[7870AC1]-
      RIP: 0010:[<ffffffff81405b16>]  [<ffffffff81405b16>]
      pskb_expand_head+0x36/0x1e0
      RSP: 0018:ffff880028303b70  EFLAGS: 00010202
      RAX: 0000000000000002 RBX: ffff880c6458ec80 RCX: 0000000000000020
      RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880c6458ec80
      RBP: ffff880028303bc0 R08: ffffffff818a6180 R09: ffff880c6458ed64
      R10: ffff880c622b36c0 R11: 0000000000000400 R12: 0000000000000000
      R13: 0000000000000180 R14: ffff880c622b3000 R15: 0000000000000000
      FS:  0000000000000000(0000) GS:ffff880028300000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
      CR2: 00000038653452a4 CR3: 0000000001001000 CR4: 00000000000006e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Process swapper (pid: 0, threadinfo ffff8806649c2000, task ffff880c64f16ab0)
      Stack:
       ffff880028303bc0 ffffffff8104fff9 000000000000001c 0000000100000000
      <0> ffff880000047d80 ffff880c6458ec80 000000000000001c ffff880c6223da00
      <0> ffff880c622b3000 0000000000000000 ffff880028303c10 ffffffff81407f7a
      Call Trace:
      <IRQ>
       [<ffffffff8104fff9>] ? __wake_up_common+0x59/0x90
       [<ffffffff81407f7a>] __pskb_pull_tail+0x2aa/0x360
       [<ffffffffa0244530>] bond_arp_rcv+0x2c0/0x2e0 [bonding]
       [<ffffffff814a0857>] ? packet_rcv+0x377/0x440
       [<ffffffff8140f21b>] netif_receive_skb+0x2db/0x670
       [<ffffffff8140f788>] napi_skb_finish+0x58/0x70
       [<ffffffff8140fc89>] napi_gro_receive+0x39/0x50
       [<ffffffffa01286eb>] ixgbe_clean_rx_irq+0x35b/0x900 [ixgbe]
       [<ffffffffa01290f6>] ixgbe_clean_rxtx_many+0x136/0x240 [ixgbe]
       [<ffffffff8140fe53>] net_rx_action+0x103/0x210
       [<ffffffff81073bd7>] __do_softirq+0xb7/0x1e0
       [<ffffffff810d8740>] ? handle_IRQ_event+0x60/0x170
       [<ffffffff810142cc>] call_softirq+0x1c/0x30
       [<ffffffff81015f35>] do_softirq+0x65/0xa0
       [<ffffffff810739d5>] irq_exit+0x85/0x90
       [<ffffffff814cf915>] do_IRQ+0x75/0xf0
       [<ffffffff81013ad3>] ret_from_intr+0x0/0x11
       <EOI>
       [<ffffffff8101bc01>] ? mwait_idle+0x71/0xd0
       [<ffffffff814cd80a>] ? atomic_notifier_call_chain+0x1a/0x20
       [<ffffffff81011e96>] cpu_idle+0xb6/0x110
       [<ffffffff814c17c8>] start_secondary+0x1fc/0x23f
      
      This resulted from the bonding driver registering packet handlers via
      dev_add_pack and then trying to call pskb_may_pull. If another packet
      handler (like the one for AF_PACKET sockets) gets called first, the
      delivered skb will have a user count > 1, which causes pskb_may_pull to BUG
      halt when it does its skb_shared check.  Fix this by calling
      skb_share_check prior to the may_pull call sites in the bonding driver to
      clone the skb when needed.  Tested by myself and the reporter successfully.
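      The pattern, roughly (a sketch of the approach, not the literal patch):

          /* take a private copy if someone else also holds a reference, so
           * that pskb_may_pull() is free to reallocate the skb data */
          skb = skb_share_check(skb, GFP_ATOMIC);
          if (!skb)
              goto out;                           /* clone failed, drop */

          if (!pskb_may_pull(skb, arp_hdr_len(bond->dev)))
              goto out;                           /* runt frame, ignore */

          /* safe to parse the ARP header from the linear area from here on */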
      
      Signed-off-by: Neil Horman
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  29. 17 Dec 2010, 1 commit
  30. 15 Sep 2010, 1 commit
    • bonding: correctly process non-linear skbs · ab12811c
      Authored by Andy Gospodarek
      It was recently brought to my attention that 802.3ad mode bonds would no
      longer form when using some network hardware after a driver update.
      After snooping around I realized that the particular hardware was using
      page-based skbs and found that skb->data did not contain a valid LACPDU
      as it was not stored there.  That explained the inability to form an
      802.3ad-based bond.  For balance-alb mode bonds this was also an issue
      as ARPs would not be properly processed.
      
      This patch fixes the issue in my tests and should be applied to 2.6.36
      and as far back as anyone cares to add it to stable.
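      One common way to handle this, sketched under the assumption that
      skb_header_pointer() is used to copy the header out of a possibly
      non-linear skb:

          struct arp_pkt *arp, _arp;

          arp = skb_header_pointer(skb, 0, sizeof(_arp), &_arp);
          if (!arp)
              goto out;
          /* ... process the ARP fields as before ... */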
      
      Thanks to Alexander Duyck <alexander.h.duyck@intel.com> and Jesse
      Brandeburg <jesse.brandeburg@intel.com> for the suggestions on this one.
      Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
      CC: Alexander Duyck <alexander.h.duyck@intel.com>
      CC: Jesse Brandeburg <jesse.brandeburg@intel.com>
      CC: stable@kernel.org
      Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>