- 28 October 2013, 2 commits
-
-
Submitted by David S. Miller
This reverts commit 4d961a10, reversing changes made to a00f6fcc. Revert the bond locking changes; they cause regressions, and Veaceslav Falico doesn't like how the commit messages were done at all. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by dingtianhong
The bond slave list may change while the monitor is running. The slave list is no longer protected by bond->lock, only by rtnl_lock(), so there are three ways to handle it: 1. wrap bond_master_upper_dev_link() and bond_upper_dev_unlink() in bond->lock, but it is unsafe to call call_netdevice_notifiers() under a write lock; 2. remove the unused bond->lock from the monitor functions and rely only on the existing rtnl_lock(); 3. use rcu_read_lock() to protect the list, which would turn bond_for_each_slave() into bond_for_each_slave_rcu() and perform better, but this is a slow path where that gain can be ignored. So remove bond->lock and take the rtnl lock around the whole monitor function. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
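A minimal sketch of the resulting monitor shape, assuming the field and macro names used elsewhere in this log (the body is illustrative, not the actual driver code):

    /* Illustrative only: the periodic 802.3ad monitor takes rtnl_lock()
     * for the whole pass, since RTNL is what protects the slave list
     * after this change. */
    static void bond_ad_monitor_sketch(struct work_struct *work)
    {
        struct bonding *bond = container_of(work, struct bonding,
                                            ad_work.work);
        struct list_head *iter;
        struct slave *slave;

        rtnl_lock();                        /* replaces read_lock(&bond->lock) */
        bond_for_each_slave(bond, slave, iter) {
            /* per-port state machine work would run here */
        }
        rtnl_unlock();

        /* re-arm; the interval below is a placeholder, not the driver's */
        queue_delayed_work(bond->wq, &bond->ad_work, msecs_to_jiffies(100));
    }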
-
- 18 October 2013, 1 commit
-
-
Submitted by dingtianhong
Commit 278b2083 (bonding: initial RCU conversion) converted the roundrobin, active-backup, broadcast and xor xmit paths to RCU protection, which improves performance for those modes; this time, convert the xmit path for 3ad mode as well. Suggested-by: Nikolay Aleksandrov <nikolay@redhat.com> Suggested-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: Wang Yufen <wangyufen@huawei.com> Cc: Nikolay Aleksandrov <nikolay@redhat.com> Cc: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 October 2013, 1 commit
-
-
Submitted by Nikolay Aleksandrov
This patch adds two new hash policy modes which use skb_flow_dissect: 3 - Encapsulated layer 2+3, 4 - Encapsulated layer 3+4. There should be a good improvement for tunnel users in those modes. It also changes the old hash functions to: hash ^= (__force u32)flow.dst ^ (__force u32)flow.src; hash ^= (hash >> 16); hash ^= (hash >> 8); where hash is initialized either to the L2 hash, that is SRCMAC[5] XOR DSTMAC[5], or to flow->ports extracted from the upper layer. The flow's dst and src are also extracted based on the xmit policy, either directly from the buffer or by using skb_flow_dissect, but in both cases, if the protocol is IPv6, dst and src are obtained via ipv6_addr_hash() on the real addresses. For a non-dissectable packet, the algorithms fall back to L2 hashing. The bond_set_mode_ops() function is now obsolete and thus deleted, because it was used only to set the proper hash policy. We also trim a pointer from struct bonding because we no longer need to keep the hash function; there is now only a single hash function, bond_xmit_hash, that works based on bond->params.xmit_policy. The hash function and skb_flow_dissect were suggested by Eric Dumazet. The layer names were suggested by Andy Gospodarek, because I suck at semantics. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
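The fold described above is simple enough to show as a self-contained sketch (plain user-space C, with fixed-width types standing in for the kernel's __be32/__force annotations):

    #include <stdint.h>

    /* Sketch of the hash described in the commit message above: "hash"
     * starts as the L2 seed (SRCMAC[5] ^ DSTMAC[5]) or the flow ports,
     * then the L3 addresses are mixed in and folded down. */
    static uint32_t bond_xmit_hash_sketch(uint32_t seed,
                                          uint32_t flow_dst, uint32_t flow_src)
    {
        uint32_t hash = seed;

        hash ^= flow_dst ^ flow_src;
        hash ^= hash >> 16;
        hash ^= hash >> 8;

        return hash;    /* the caller typically reduces this modulo the slave count */
    }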
-
- 29 September 2013, 8 commits
-
-
Submitted by Veaceslav Falico
It has no users, so it's safe to remove it completely. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Convert all instances of for (agg = __get_first_agg(); agg; agg = __get_next_port) to the standard bond_for_each_slave(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Convert all instances of for (agg = __get_first_agg(); agg; agg = __get_next_port) to the standard bond_for_each_slave(). Also, remove the useless checks before calling bond_3ad_set_carrier() - if anything there were NULL, it would have fired long ago, in __get_first/next_port() for example. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Currently we rely on the suboptimal construct for (; aggregator; aggregator = __get_next_agg(aggregator)) { , where aggregator is an argument of __get_active_agg() and is _always_ the first slave's aggregator - judging by all the callers, the comments in ad_agg_selection_logic(), and by the logic itself. Convert it to use the standard bond_for_each_slave(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Currently, ad_port_selection_logic() uses the construct for (aggregator = __get_first_agg(port); aggregator; aggregator = __get_next_agg(aggregator)) { , which is suboptimal and difficult to read and understand. Change it to a standard bond_for_each_slave(), so that we won't need __get_first/next_agg() and the code becomes more readable. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Currently we have only one user of it, so it's kind of useless and just obfuscates things. Remove it and move the logic to the only user - bond_3ad_state_machine_handler(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Veaceslav Falico
Currently this function is only used in constructs like for (port = __get_first_port(bond); port; port = __get_next_port(port)) which is basically the same as bond_for_each_slave(bond, slave, iter) { port = &(SLAVE_AD_INFO(slave).port); but more time consuming. Remove the function and convert the users to bond_for_each_slave(). CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
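Side by side, the two loop shapes quoted above look like this (fragment only; declarations and the loop body are elided):

    /* old shape: walk the ports directly */
    for (port = __get_first_port(bond); port; port = __get_next_port(port)) {
        /* ... use port ... */
    }

    /* new shape: walk the slaves and take each slave's embedded port */
    bond_for_each_slave(bond, slave, iter) {
        port = &(SLAVE_AD_INFO(slave).port);
        /* ... use port ... */
    }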
-
Submitted by Veaceslav Falico
After commit 1f718f0f ("bonding: populate neighbour's private on enslave"), we've moved the unlinking of the slave to the earliest position possible - so that nobody will see a half-uninitialized slave. However, bond_3ad_unbind_slave() relied on the slave still being accessible - via __get_first_agg() (and, eventually, bond_first_slave()) - even while removing the last slave. Fix that by verifying that the returned aggregator is an actual aggregator and not NULL. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 27 September 2013, 3 commits
-
-
Submitted by Veaceslav Falico
Currently we verify that we have slaves by checking whether bond->slave_list is empty. Create a bond_has_slaves() define and use it; it is a bit more readable and easier to change in the future. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
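A minimal sketch of such a helper, assuming it simply wraps the list_empty() check mentioned above (the actual define may differ slightly):

    /* hedged sketch: hide the emptiness check behind a named helper */
    #define bond_has_slaves(bond)   (!list_empty(&(bond)->slave_list))

    /* typical usage */
    if (!bond_has_slaves(bond))
        return;     /* nothing to do without slaves */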
-
Submitted by Veaceslav Falico
Currently there are two loops: first we find the first slave in an aggregator after the number returned by xmit_hash_policy(), and then we loop from that slave, over the bonding head, and back around to that slave, looking for any suitable slave to send the packet through. Replace them with just one bond_for_each_slave() loop, which first loops through the requested number of slaves, saving the first suitable one, and once we've hit the requested number of slaves to skip, searches for any up slave to send the packet through. If we don't find such a slave, we just send the packet through the first suitable slave found earlier. The logic remains unchanged, and we avoid the two loops. Also, refactor it a bit for readability. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
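A simplified sketch of the single-loop selection described above (names follow the driver, but the body is illustrative; skb and slave_agg_no come from the caller, and the aggregator checks and error handling are elided):

    struct slave *slave, *tx_slave = NULL, *first_ok_slave = NULL;
    struct list_head *iter;
    /* slave_agg_no: how many usable slaves to skip, derived from the hash */

    bond_for_each_slave(bond, slave, iter) {
        if (!SLAVE_IS_OK(slave))
            continue;

        if (slave_agg_no > 0) {
            if (!first_ok_slave)
                first_ok_slave = slave;    /* remember a fallback */
            slave_agg_no--;
            continue;
        }
        tx_slave = slave;                  /* reached the requested position */
        break;
    }

    if (!tx_slave)                         /* nothing usable at/after that position */
        tx_slave = first_ok_slave;         /* fall back to the first usable one */

    if (tx_slave)
        bond_dev_queue_xmit(bond, skb, tx_slave->dev);
    else
        kfree_skb(skb);                    /* no usable slave at all: drop */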
-
Submitted by Veaceslav Falico
It needs a list_head *iter, so add it wherever needed. Use both non-rcu and rcu variants. CC: Jay Vosburgh <fubar@us.ibm.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: Dimitris Michailidis <dm@chelsio.com> Signed-off-by: Veaceslav Falico <vfalico@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 September 2013, 1 commit
-
-
Submitted by nikolay@redhat.com
We can drop the use of bond->lock for mutual exclusion in bond_3ad_update_lacp_rate and use RTNL in the sysfs store function instead. This way we prevent races with mode changes and interface up/down, and we simplify update_lacp_rate by removing the check for port->slave, because it will always be initialized (that is done while enslaving, with RTNL held). This change will also help the future removal of the reader bond->lock from bond_enslave. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 August 2013, 2 commits
-
-
Submitted by nikolay@redhat.com
This patch does the initial bonding conversion to RCU. After it, the following modes are protected by RCU alone: roundrobin, active-backup, broadcast and xor. The ALB/TLB and 3ad modes still acquire bond->lock for reading and will be dealt with later. curr_active_slave needs to be dereferenced via rcu in the converted modes because the only thing protecting the slave after this patch is rcu_read_lock, so we need the proper barrier for weakly ordered archs and to make sure we don't use a stale pointer. It's not tagged with __rcu yet because there's still work to be done to remove curr_slave_lock, so sparse will complain when rcu_assign_pointer and rcu_dereference are used, but the alternative of using rcu_dereference_protected would have created much bigger code churn, which is more difficult to test and review. That will be converted in time. 1. Active-backup mode: 1.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.55% in bonding, system spent 0.29% CPU in bonding; new bonding: iperf spent 0.29% in bonding, system spent 0.15% CPU in bonding. 1.2 Bandwidth measurements - old bonding: 16.1 gbps consistently; new bonding: 17.5 gbps consistently. 2. Round-robin mode: 2.1 Perf recording while doing iperf -P 4 - old bonding: iperf spent 0.51% in bonding, system spent 0.24% CPU in bonding; new bonding: iperf spent 0.16% in bonding, system spent 0.11% CPU in bonding. 2.2 Bandwidth measurements - old bonding: 8 gbps (variable due to packet reordering); new bonding: 10 gbps (variable due to packet reordering). Of course the latency has improved in all converted modes, and moreover while doing enslave/release (since that no longer affects tx). I've also stress-tested all modes doing enslave/release in a loop while transmitting traffic. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
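A simplified sketch of what a converted transmit path looks like under these rules, using active-backup as the example (illustrative, not the exact function; error paths and vlan handling are elided):

    static netdev_tx_t bond_xmit_activebackup_sketch(struct sk_buff *skb,
                                                     struct net_device *bond_dev)
    {
        struct bonding *bond = netdev_priv(bond_dev);
        struct slave *slave;

        rcu_read_lock();                     /* replaces read_lock(&bond->lock) */
        slave = rcu_dereference(bond->curr_active_slave);
        if (slave)
            bond_dev_queue_xmit(bond, skb, slave->dev);
        else
            kfree_skb(skb);                  /* no active slave: drop */
        rcu_read_unlock();

        return NETDEV_TX_OK;
    }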
-
Submitted by nikolay@redhat.com
This patch aims to remove struct bonding's first_slave and struct slave's next and prev pointers, and to replace them with the standard Linux list API. The old macros are converted to the list API as well, and some new primitives are available now. The checks for whether there are slaves, which used slave_cnt, have been replaced by the list_empty macro. Also a few small style fixes: changing longest -> shortest line ordering in local variable declarations, leaving an empty line before return, and removing unnecessary brackets. This is the first step of the gradual RCU conversion. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
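The shape of the change, as a hedged sketch (member and macro names are close to, but not guaranteed to match, the actual patch):

    /* hand-rolled links replaced by list_head */
    struct bonding {
        struct list_head slave_list;    /* replaces struct slave *first_slave */
        /* ... other members unchanged ... */
    };

    struct slave {
        struct list_head list;          /* replaces the open-coded next/prev */
        /* ... other members unchanged ... */
    };

    /* iteration becomes the standard list helper */
    #define bond_for_each_slave(bond, pos) \
        list_for_each_entry(pos, &(bond)->slave_list, list)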
-
- 20 May 2013, 1 commit
-
-
Submitted by nikolay@redhat.com
When bond_3ad_get_active_agg_info() is used in all of the show_ad_ functions, it is not protected against slave manipulation, and since it walks over the slaves and uses them, this can easily result in a NULL pointer dereference or use of freed memory. Both the new wrapper and the internal function are exported to the bonding code, as they're needed in different places. Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 19 February 2013, 2 commits
-
-
Submitted by nikolay@redhat.com
The 3ad state machine spinlock can be used before it is initialized during bond_enslave() (while the port is being initialized), since port->slave is set before the lock is prepared, thus causing soft lock-ups and a multitude of other nasty bugs. [ Rename __initialize_port_locks() variable name to 'slave' -DaveM ] Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by nikolay@redhat.com
port->slave can be NULL, since it is being initialized in bond_enslave, thus causing a NULL pointer dereference in bond_3ad_update_lacp_rate(). Also fix a minor bug which could cause a port not to have AD_STATE_LACP_TIMEOUT, since there is no synchronization between bond_3ad_update_lacp_rate() and bond_3ad_bind_slave(); do this by changing the read_lock to a write_lock_bh in bond_3ad_update_lacp_rate(). Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 05 January 2013, 1 commit
-
-
Submitted by Jiri Pirko
Benefit from the new upper dev list and free bonding from dev->master usage. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 14 June 2012, 1 commit
-
-
Submitted by Eric Dumazet
When packets are dropped in the TX path, it's better to use kfree_skb() instead of dev_kfree_skb() to give proper drop_monitor events. Also move the kfree_skb() call after read_unlock() in bond_alb_xmit() and bond_xmit_activebackup(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 June 2012, 1 commit
-
-
Submitted by Eric Dumazet
Cloning all packets in the input path has a significant cost. Use skb_header_pointer()/skb_copy_bits() instead of pskb_may_pull() so that the recv_probe handlers (bond_3ad_lacpdu_recv / bond_arp_rcv / rlb_arp_recv) don't touch the input skb. bond_handle_frame() can then avoid the skb_clone()/dev_kfree_skb(). Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jay Vosburgh <fubar@us.ibm.com> Cc: Andy Gospodarek <andy@greyhouse.net> Cc: Jiri Bohac <jbohac@suse.cz> Cc: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Cc: Maciej Żenczykowski <maze@google.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
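A sketch of the parsing pattern this enables for the LACPDU probe, assuming the handler shape used elsewhere in this log (simplified; the real handler validates more):

    /* Copy the header of interest into a stack buffer instead of pulling
     * or cloning, so the shared input skb is never modified. */
    static int bond_3ad_lacpdu_recv_sketch(const struct sk_buff *skb,
                                           struct bonding *bond,
                                           struct slave *slave)
    {
        struct lacpdu *lacpdu, _lacpdu;

        if (skb->protocol != PKT_TYPE_LACPDU)
            return RX_HANDLER_ANOTHER;

        lacpdu = skb_header_pointer(skb, 0, sizeof(_lacpdu), &_lacpdu);
        if (!lacpdu)
            return RX_HANDLER_ANOTHER;      /* too short to be a LACPDU */

        return bond_3ad_rx_indication(lacpdu, slave, skb->len);
    }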
-
- 11 May 2012, 1 commit
-
-
Submitted by Jiri Bohac
Since commit 3aba891d, bonding processes LACP frames (802.3ad mode) with bond_handle_frame(). Currently a copy of the skb is made and the original is left to be processed by other rx_handlers and the rest of the network stack, by returning RX_HANDLER_ANOTHER. As there is no protocol handler for PKT_TYPE_LACPDU, the frame is dropped and dev->rx_dropped is increased. Fix this by making bond_handle_frame() return RX_HANDLER_CONSUMED if bonding has processed the LACP frame. Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: David S. Miller <davem@davemloft.net>
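One possible shape of that check inside bond_handle_frame() (a hedged fragment, not the exact patch; the cloning done at the time is elided):

    /* if the per-mode probe reports that it consumed the LACPDU, stop here
     * instead of passing the frame on only to have the stack drop it */
    ret = recv_probe(skb, bond, slave);
    if (ret == RX_HANDLER_CONSUMED) {
        consume_skb(skb);
        return RX_HANDLER_CONSUMED;
    }
    /* otherwise let the rest of the stack see the frame */
    return RX_HANDLER_ANOTHER;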
-
- 10 February 2012, 1 commit
-
-
Submitted by Jesper Juhl
Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 06 February 2012, 1 commit
-
-
Submitted by Jesper Juhl
Signed-off-by: Jesper Juhl <jj@chaosbits.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 October 2011, 1 commit
-
-
Submitted by Jay Vosburgh
This patch resolves two sets of race conditions. Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> reported the first, as follows: The bond_close() calls cancel_delayed_work() to cancel delayed works. It, however, cannot cancel works that were already queued in the workqueue. The bond_open() initializes work->data, and process_one_work() refers to get_work_cwq(work)->wq->flags. The get_work_cwq() returns NULL when work->data has been initialized. Thus, a panic occurs. He included a patch that converted the cancel_delayed_work calls in bond_close to flush_delayed_work_sync, which eliminated the above problem. His patch is incorporated, at least in principle, into this patch. In this patch, we use cancel_delayed_work_sync in place of flush_delayed_work_sync, and also convert bond_uninit in addition to bond_close. This conversion to _sync, however, opens new races between bond_close and three periodically executing workqueue functions: bond_mii_monitor, bond_alb_monitor and bond_activebackup_arp_mon. The race occurs because bond_close and bond_uninit are always called with RTNL held, and these workqueue functions may acquire RTNL to perform failover-related activities. If bond_close or bond_uninit is waiting in cancel_delayed_work_sync, deadlock occurs. These deadlocks are resolved by having the workqueue functions acquire RTNL conditionally. If the rtnl_trylock() fails, the functions reschedule and return immediately. For the cases that are attempting to perform link failover, a delay of 1 is used; for the other cases, the normal interval is used (as those activities are not as time critical). Additionally, the bond_mii_monitor function now stores the delay in a variable (mimicking the structure of activebackup_arp_mon). Lastly, all of the above renders the kill_timers sentinel moot, and therefore it has been removed. Tested-by: Mitsuo Hayasaka <mitsuo.hayasaka.hu@hitachi.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
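A simplified sketch of the conditional-RTNL pattern described above, using bond_mii_monitor as the example (names follow the driver, but the body is illustrative and omits most of the real function):

    static void bond_mii_monitor_sketch(struct work_struct *work)
    {
        struct bonding *bond = container_of(work, struct bonding,
                                            mii_work.work);
        unsigned long delay = msecs_to_jiffies(bond->params.miimon);

        if (bond_miimon_inspect(bond)) {       /* link changes need a commit */
            if (!rtnl_trylock()) {
                /* bond_close()/bond_uninit() may hold RTNL while waiting in
                 * cancel_delayed_work_sync(); back off and retry soon */
                queue_delayed_work(bond->wq, &bond->mii_work, 1);
                return;
            }
            /* ... failover/commit work runs here under RTNL ... */
            rtnl_unlock();
        }

        queue_delayed_work(bond->wq, &bond->mii_work, delay);
    }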
-
- 20 October 2011, 1 commit
-
-
Submitted by Flavio Leitner
The port shouldn't be enabled unless its current MUX state is DISTRIBUTING, which is correctly handled by ad_mux_machine(); otherwise the packet sent can be lost because the other end may not be ready. The issue happens on every port initialization, but as the ports are expected to move quickly to DISTRIBUTING, it doesn't cause much of a problem. However, it does cause constant packet loss if the other peer has the port configured to stay in STANDBY (i.e. SYNC set to OFF). Signed-off-by: Flavio Leitner <fbl@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 October 2011, 1 commit
-
-
Submitted by Andy Gospodarek
During a test where a pair of bonding interfaces using ARP monitoring were both brought up and torn down (with an rmmod) repeatedly, a panic in the timer code was noticed. I tracked this down and determined that any of the bonding functions that ran as workqueue handlers and requeued more work might not properly exit when the module was removed. There was a flag protected by the bond lock called kill_timers that is set when the interface goes down or the module is removed, but many of the functions that monitor link status now unlock the bond lock to take rtnl first. There is a chance that another CPU running the rmmod could get the lock and set kill_timers after the first check has passed. This patch does not allow any function to queue work that will make itself run unless kill_timers is not set. I also noticed while doing this work that bond_resend_igmp_join_requests did not have a check for kill_timers, so I added the needed call there as well. Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Reported-by: Liang Zheng <lzheng@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 June 2011, 1 commit
-
-
Submitted by stephen hemminger
This adds support for configuring the minimum number of links that must be active before asserting carrier. It is similar to the Cisco EtherChannel min-links feature. This allows setting the minimum number of member ports that must be up (link-up state) before marking the bond device as up (carrier on). This is useful for situations where higher-level services such as clustering want to ensure a minimum number of low-bandwidth links is active before switchover. See: http://bugzilla.vyatta.com/show_bug.cgi?id=7196 Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Flavio Leitner <fbl@redhat.com> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
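In essence, the carrier decision gains a threshold. A minimal sketch of the idea, using a hypothetical helper name (bond->params.min_links is the new option; 0 keeps the historic behaviour):

    /* hypothetical helper, illustrative only */
    static bool bond_enough_links_sketch(struct bonding *bond, int up_ports)
    {
        return up_ports >= bond->params.min_links;
    }

    /* in the carrier-update path */
    if (bond_enough_links_sketch(bond, active_up_ports))
        netif_carrier_on(bond->dev);
    else
        netif_carrier_off(bond->dev);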
-
- 14 June 2011, 1 commit
-
-
Submitted by Peter Pan (潘卫平)
Dan Carpenter found that there was a dereference before a check, added in 56d00c67 (bonding: delete lacp_fast from ad_bond_info). Signed-off-by: Weiping Pan <panweiping3@gmail.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@conan.davemloft.net>
-
- 10 June 2011, 3 commits
-
-
Submitted by Peter Pan (潘卫平)
bond_params->ad_select and ad_bond_info->agg_select_mode have the same meaning; they are duplicates and need extra synchronization. Make __get_agg_selection_mode() read ad_select from bond_params directly. Signed-off-by: Weiping Pan <panweiping3@gmail.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peter Pan (潘卫平)
There is also a bug: if you modify lacp_rate via sysfs and then add new slaves to the bond, the new slaves won't use the latest lacp_rate, since ad_bond_info->lacp_fast is initialized only once, in bond_3ad_initialize(). Since both struct bond_params and ad_bond_info have lacp_fast, they are duplicates and need extra synchronization. bond_3ad_bind_slave() can use bond_params->lacp_fast to initialize the port, so we can just remove lacp_fast from struct ad_bond_info. Signed-off-by: Weiping Pan <panweiping3@gmail.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Peter Pan (潘卫平)
There is a bug: when you modify lacp_rate via sysfs, 802.3ad won't use the new value of lacp_rate to transmit packets, because port->actor_oper_port_state isn't changed. Signed-off-by: Weiping Pan <panweiping3@gmail.com> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
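What "use the new lacp_rate" boils down to is flipping the LACP timeout bit in each port's operational state. A hedged per-slave sketch (locking and the iteration over the slave list are elided):

    static void bond_3ad_refresh_lacp_rate_sketch(struct bonding *bond,
                                                  struct slave *slave)
    {
        /* called for each slave of the bond */
        struct port *port = &(SLAVE_AD_INFO(slave).port);

        if (bond->params.lacp_fast)
            port->actor_oper_port_state |= AD_STATE_LACP_TIMEOUT;
        else
            port->actor_oper_port_state &= ~AD_STATE_LACP_TIMEOUT;
    }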
-
- 10 May 2011, 1 commit
-
-
Submitted by Michał Mirosław
Pull read_lock(&bond->lock) and the BOND_IS_OK() check up into bond_start_xmit() from the mode-dependent xmit functions. netif_running() is always true in hard_start_xmit. Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 26 April 2011, 1 commit
-
-
Submitted by Jiri Pirko
Now that bonding uses an rx_handler, all traffic going into the bond device goes through bond_handle_frame, so there's no need to go back into the bonding code later via ptype handlers. This patch converts the original ptype handlers into "bonding receive probes". These functions are called from bond_handle_frame and are registered per mode. Note that vlan packets are also handled, because they are always untagged thanks to vlan_untag(). Note that this also allows arpmon for an eth-bond-bridge-vlan topology. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
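A hedged sketch of how such per-mode probes could be wired up (fragments only; the handler names are the ones this log mentions elsewhere, the surrounding code is illustrative):

    /* in bond_open(): pick the receive probe for the current mode */
    if (bond->params.mode == BOND_MODE_8023AD)
        bond->recv_probe = bond_3ad_lacpdu_recv;
    else if (bond->params.mode == BOND_MODE_ALB)
        bond->recv_probe = rlb_arp_recv;
    else if (bond->params.arp_interval)
        bond->recv_probe = bond_arp_rcv;

    /* in bond_handle_frame(): every frame entering the bond passes here */
    if (bond->recv_probe)
        bond->recv_probe(skb, bond, slave);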
-
- 20 April 2011, 1 commit
-
-
Submitted by Jiri Bohac
The slave member of struct aggregator does not necessarily point to a slave which is part of the aggregator. It points to the slave structure containing the aggregator structure, while completely different slaves (or no slaves at all) may be part of the aggregator. The agg_device_up() function wrongly uses agg->slave to find the state of the aggregator. Use agg->lag_ports->slave instead. The bug was introduced by commit 4cd6fe1c ("bonding: fix link down handling in 802.3ad mode"). Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: Jay Vosburgh <fubar@us.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
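A sketch of the corrected helper, following the description above (simplified; the real function may differ in detail):

    static inline int agg_device_up(const struct aggregator *agg)
    {
        struct port *port = agg->lag_ports;    /* first port actually in the LAG */

        if (!port)
            return 0;

        /* judge the aggregator by the slave behind its first port, not by
         * the slave that merely embeds the aggregator structure */
        return netif_running(port->slave->dev) &&
               netif_carrier_ok(port->slave->dev);
    }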
-
- 15 April 2011, 1 commit
-
-
Submitted by David Decotigny
The __get_link_speed() function returns a u16 value which was stored in a u32 local variable. This patch uses the return value directly, thus fixing that minor type inconsistency. The 'duplex' field in struct slave is encoded on 8 bits, so to be more consistent we use a u8 integer (instead of u16) whenever we copy it to local variables. Signed-off-by: David Decotigny <decot@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 March 2011, 1 commit
-
-
Submitted by Jiri Pirko
Transfer slave->state into slave->backup (which is later going to be turned into a bitfield). Introduce wrapper inlines to do the work with it. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
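Wrappers of roughly this shape would hide the raw field behind named helpers (a hedged sketch; the actual inline names in the patch may differ):

    /* hedged sketch of the wrapper inlines around slave->backup */
    static inline void bond_set_active_slave(struct slave *slave)
    {
        slave->backup = 0;
    }

    static inline void bond_set_backup_slave(struct slave *slave)
    {
        slave->backup = 1;
    }

    static inline bool bond_is_active_slave(struct slave *slave)
    {
        return !slave->backup;
    }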
-