- 12 July 2012, 17 commits
-
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
And delete rt6_redirect(), since it is no longer used. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Hook it into dst_ops->redirect as well. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
This sets things up so that we can have the protocol error handlers call down into the ipv6 route code for redirects, just as ipv4 already does. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
This is going to be used internally by the rt6 redirect code. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
No longer needed, as the protocol handlers now all properly propagate the redirect back into the routing code. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
All of the redirect acceptance policy is now contained within. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Pass in the SKB rather than just the IP addresses, so that policy and other aspects can reside in ip_rt_redirect() rather than icmp_redirect(). Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
And thus, we can remove the ping_err() hack. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
This introduces TSQ (TCP Small Queues). The TSQ goal is to reduce the number of TCP packets in xmit queues (qdisc & device queues), to reduce RTT and cwnd bias, part of the bufferbloat problem. sk->sk_wmem_alloc is not allowed to grow above a given limit, allowing no more than ~128KB [1] per tcp socket in the qdisc/dev layers at a given time. TSO packets are sized/capped to half the limit, so that we have two TSO packets in flight, allowing better bandwidth use. As a side effect, setting the limit to 40000 automatically reduces the standard gso max limit (65536) to 40000/2: it can help to reduce latencies of high-prio packets, having smaller TSO packets. This means we divert sock_wfree() to a tcp_wfree() handler, to queue/send following frames when skb_orphan() [2] is called for the already queued skbs. Results on my dev machines (tg3/ixgbe nics) are really impressive, using standard pfifo_fast, and with or without TSO/GSO. Without reduction of nominal bandwidth, we have reduction of buffering per bulk sender: < 1ms on Gbit (instead of 50ms with TSO), < 8ms on 100Mbit (instead of 132 ms). I no longer have 4 MBytes backlogged in qdisc by a single netperf session, and both sides' socket autotuning no longer uses 4 MBytes. As the skb destructor cannot restart xmit itself (as the qdisc lock might be taken at this point), we delegate the work to a tasklet. We use one tasklet per cpu for performance reasons. If the tasklet finds a socket owned by the user, it sets the TSQ_OWNED flag. This flag is tested in a new protocol method called from release_sock(), to eventually send new segments. [1] New /proc/sys/net/ipv4/tcp_limit_output_bytes tunable [2] skb_orphan() is usually called at TX completion time, but some drivers call it in their start_xmit() handler. These drivers should at least use BQL, or else a single TCP session can still fill the whole NIC TX ring, since TSQ will have no effect. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Dave Taht <dave.taht@bufferbloat.net> Cc: Tom Herbert <therbert@google.com> Cc: Matt Mathis <mattmathis@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Nandita Dukkipati <nanditad@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
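A rough sketch of the throttling idea described above, assuming simplified names for the flag, the tunable, and the surrounding loop (illustrative only, not the exact upstream code):

    /* Stop queuing new segments once the bytes already sitting in the
     * qdisc/device layer for this socket exceed the tunable limit; the
     * flag tells the skb destructor (tcp_wfree) to reschedule us later.
     * Flag and tunable names here are illustrative. */
    if (atomic_read(&sk->sk_wmem_alloc) > sysctl_tcp_limit_output_bytes) {
            set_bit(TSQ_THROTTLED, &tp->tsq_flags);
            break;
    }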
-
Committed by Alexander Duyck
The recent patch "tcp: Maintain dynamic metrics in local cache." introduced an out-of-bounds access due to what appears to be a typo. I believe this change should resolve the issue by replacing the access to RTAX_CWND with TCP_METRIC_CWND. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
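A generic illustration of this bug class, with hypothetical enum, array, and function names (the point being that the metrics cache is sized by the small TCP_METRIC_* index space, not the larger RTAX_* one):

    /* Hypothetical names: the cache array is sized by the small metric enum,
     * so indexing it with a constant from the larger RTAX_* namespace reads
     * past the end of the array. */
    enum metric_idx { METRIC_RTT, METRIC_CWND, METRIC_MAX };

    static unsigned int metric_vals[METRIC_MAX];

    static unsigned int read_cached_cwnd(void)
    {
            return metric_vals[METRIC_CWND];   /* correct index space */
            /* metric_vals[RTAX_CWND] would read out of bounds here */
    }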
-
- 11 July 2012, 23 commits
-
-
Committed by David S. Miller
Fixes the build when ipv6 is disabled. Reported-by: Fengguang Wu <wfg@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Li RongQing
mld->mld_maxdelay is net endian, so we should use ntohs, not htons. CC: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> Signed-off-by: Li RongQing <roy.qing.li@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
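For 16-bit values the two helpers perform the same byte swap, but only ntohs carries the right meaning (and __be16 typing) for a field read from the wire. A minimal sketch of the intended direction, assuming the surrounding variable names:

    /* mld->mld_maxdelay arrives in network byte order (__be16);
     * convert network -> host before using it as a number. */
    unsigned long max_delay = ntohs(mld->mld_maxdelay);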
-
Committed by Li RongQing
commit 6d29b1ef introduced a bug: ntohs is __be16_to_cpu, not cpu_to_be16. We always use htons on IP_OFFSET and IP_MF, then compare against the network-byte-order field in the packet. Signed-off-by: Li RongQing <roy.qing.li@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
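A sketch of the convention the fix restores: apply htons (cpu_to_be16) to the host-order constants and compare against the big-endian field taken from the packet, rather than converting the field itself with ntohs (__be16_to_cpu):

    /* frag_off comes straight from the IP header, i.e. big endian;
     * the IP_MF/IP_OFFSET masks are host-order constants. */
    if (iph->frag_off & htons(IP_MF | IP_OFFSET)) {
            /* packet is a fragment */
    }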
-
Committed by Li RongQing
ETH_P_IP is host endian and skb->protocol is big endian; when comparing them, using htons on skb->protocol is wrong, and htons should be applied to ETH_P_IP instead. Also fix two code style issues: indentation and unnecessary parentheses. CC: Tristram Ha <Tristram.Ha@micrel.com> CC: Ben Hutchings <bhutchings@solarflare.com> CC: Joe Perches <joe@perches.com> Signed-off-by: Li RongQing <roy.qing.li@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
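The conventional comparison keeps the byte-order conversion on the constant side, for example:

    /* skb->protocol is big endian (__be16); ETH_P_IP is a host-order
     * constant, so the conversion belongs on the constant, where it also
     * folds at compile time and keeps the __be16 types consistent. */
    if (skb->protocol == htons(ETH_P_IP)) {
            /* handle IPv4 frame */
    }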
-
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Committed by David S. Miller
Conflicts: net/batman-adv/bridge_loop_avoidance.c net/batman-adv/bridge_loop_avoidance.h net/batman-adv/soft-interface.c net/mac80211/mlme.c With merge help from Antonio Quartulli (batman-adv) and Stephen Rothwell (drivers/net/usb/qmi_wwan.c). The net/mac80211/mlme.c conflict seemed easy enough, accounting for a conversion to some new tracing macros. Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Chan
In rare cases, bnx2x_free_tx_skbs() can unmap the wrong DMA address when it gets to the last entry of the tx ring. We were not using the proper macro to skip the last entry when advancing the tx index. Reported-by: Zongyun Lai <zlai@vmware.com> Reviewed-by: Jeffrey Huang <huangjw@broadcom.com> Signed-off-by: Michael Chan <mchan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
Or Gerlitz reported triggering of WARN_ON_ONCE(delta < len); in skb_try_coalesce(). This warning tracks drivers that incorrectly set skb->truesize. IPoIB indeed allocates a full page to store a fragment, but only accounts the used part of the page (the frame length) in skb->truesize. This patch fixes the skb truesize underestimation, and also fixes a performance issue, because RX skbs do not have enough tailroom to allow the IP and TCP stacks to pull their headers into the skb linear part without an expensive call to pskb_expand_head(). Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Or Gerlitz <ogerlitz@mellanox.com> Cc: Erez Shitrit <erezsh@mellanox.com> Cc: Shlomo Pongartz <shlomop@mellanox.com> Cc: Roland Dreier <roland@purestorage.com> Signed-off-by: David S. Miller <davem@davemloft.net>
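A sketch of the accounting point, under the assumption of a receive path that attaches a whole page as a fragment (names and surrounding code simplified):

    /* A full page is consumed by this fragment, so truesize must grow by
     * PAGE_SIZE even though only frame_len bytes of it carry data. */
    skb_fill_page_desc(skb, 0, page, 0, frame_len);
    skb->len      += frame_len;
    skb->data_len += frame_len;
    skb->truesize += PAGE_SIZE;     /* not frame_len */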
-
Committed by Amir Hanania
In a driver reload test there is a memory leak: the vlan_info structure was not freed when the driver was removed. It was not released because the nr_vids variable is still one after the last VLAN was removed. nr_vids is one because VLAN zero is added to the interface when the interface is set up, but VLAN zero is not deleted at unregister. Fix: delete VLAN zero when we unregister the device. Signed-off-by: Amir Hanania <amir.hanania@intel.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Merge git://git.open-mesh.org/linux-merge
Committed by David S. Miller
Included changes: - fix a bug caused by a wrong interaction between the GW feature and the Bridge Loop Avoidance
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
A missing case was causing the ethtool self-test to print a garbage value in the extra info section. Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jitendra Kalsaria
The TX queue was being stopped at the beginning of the send path instead of at the end, when the last descriptor is used. Signed-off-by: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
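A hedged sketch of the usual pattern such a fix moves toward (the helper ring_free_descriptors and the surrounding function are hypothetical): queue the frame first, then stop the queue only once the ring cannot take another maximally fragmented skb.

    static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            /* ... map buffers and write the TX descriptors for skb ... */

            /* Stop at the end of the send path, once the last usable
             * descriptor has been consumed (hypothetical helper). */
            if (unlikely(ring_free_descriptors(dev) < MAX_SKB_FRAGS + 1))
                    netif_stop_queue(dev);

            return NETDEV_TX_OK;
    }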
-
Committed by Rob Herring
Enabling RX cut-thru mode yields better performance, as received frames start getting written to memory before a whole frame is received. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rob Herring
Increase the number of outstanding read and write AXI transactions from 1 to 8 for better performance. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rob Herring
Fix intermittent hangs in xgmac_rx_refill. If a ring buffer entry already had an skb allocated, then xgmac_rx_refill would get stuck in a loop. This can happen on an rx error when we just leave the skb allocated to the entry.
[ 7884.510000] INFO: rcu_preempt detected stall on CPU 0 (t=727315 jiffies)
[ 7884.510000] [<c0010a59>] (unwind_backtrace+0x1/0x98) from [<c006fd93>] (__rcu_pending+0x11b/0x2c4)
[ 7884.510000] [<c006fd93>] (__rcu_pending+0x11b/0x2c4) from [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8)
[ 7884.510000] [<c0070b95>] (rcu_check_callbacks+0xed/0x1a8) from [<c0036abb>] (update_process_times+0x2b/0x48)
[ 7884.510000] [<c0036abb>] (update_process_times+0x2b/0x48) from [<c004e8fd>] (tick_sched_timer+0x51/0x94)
[ 7884.510000] [<c004e8fd>] (tick_sched_timer+0x51/0x94) from [<c0045527>] (__run_hrtimer+0x4f/0x1e8)
[ 7884.510000] [<c0045527>] (__run_hrtimer+0x4f/0x1e8) from [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4)
[ 7884.510000] [<c0046003>] (hrtimer_interrupt+0xd7/0x1e4) from [<c00101d3>] (twd_handler+0x17/0x24)
[ 7884.510000] [<c00101d3>] (twd_handler+0x17/0x24) from [<c006be39>] (handle_percpu_devid_irq+0x59/0x114)
[ 7884.510000] [<c006be39>] (handle_percpu_devid_irq+0x59/0x114) from [<c0069aab>] (generic_handle_irq+0x17/0x2c)
[ 7884.510000] [<c0069aab>] (generic_handle_irq+0x17/0x2c) from [<c000cc8d>] (handle_IRQ+0x35/0x7c)
[ 7884.510000] [<c000cc8d>] (handle_IRQ+0x35/0x7c) from [<c033b153>] (__irq_svc+0x33/0xb8)
[ 7884.510000] [<c033b153>] (__irq_svc+0x33/0xb8) from [<c0244b06>] (xgmac_rx_refill+0x3a/0x140)
[ 7884.510000] [<c0244b06>] (xgmac_rx_refill+0x3a/0x140) from [<c02458ed>] (xgmac_poll+0x265/0x3bc)
[ 7884.510000] [<c02458ed>] (xgmac_poll+0x265/0x3bc) from [<c029fcbf>] (net_rx_action+0xc3/0x200)
[ 7884.510000] [<c029fcbf>] (net_rx_action+0xc3/0x200) from [<c0030cab>] (__do_softirq+0xa3/0x1bc)
Signed-off-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: David S. Miller <davem@davemloft.net>
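A simplified sketch of the refill-loop hazard and the shape of the fix, with illustrative names (ring, dirty_rx, cur_rx, RING_SIZE, and bufsz are not the driver's real identifiers): always advance the index, even when an entry still owns an skb.

    while (dirty_rx != cur_rx) {
            struct ring_entry *p = &ring[dirty_rx % RING_SIZE];

            if (p->skb == NULL) {
                    p->skb = netdev_alloc_skb_ip_align(dev, bufsz);
                    if (p->skb == NULL)
                            break;          /* out of memory, retry on the next poll */
                    /* ... map the buffer and hand the descriptor back to the DMA engine ... */
            }
            dirty_rx++;     /* advance even if the skb was already there */
    }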
-
Committed by Rob Herring
Fix net tx watchdog timeout recovery. The descriptor ring was reset, but the DMA engine was not reset to the beginning of the ring. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-