- 10 May 2010, 19 commits
-
-
Committed by Gustavo F. Padovan
Now the code also handles the case in the SREJ_SENT state. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
The tx_seq variable already holds the value of __get_reqseq(). Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
It also fixes a bug: we weren't acknowledging I-frames when P=1. Note that when F=1 we acknowledge packets before setting RemoteBusy to False. The spec says we should do that in the opposite order, but the acknowledgment of packets does not depend on the RemoteBusy flag, so we can do it in whichever order we want. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
We weren't handling the receipt under the SREJ_SENT state table. This also introduces l2cap_send_srejtail(), which will be used in the next commits as well. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Create a function for each type of S-frame and avoid a lot of nested code. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
This finishes the implementation of Recv RR (P=0)(F=0) for the Enhanced Retransmission Mode on L2CAP. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Abstract the sending of the P-bit and avoid code duplication, as was done with the setting of the F-bit. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
This shall be used to acknowledge received frames. It must decide the type of acknowledgment: an RR frame, an RNR frame, or transmission of pending I-frames. It also modifies l2cap_ertm_send() to report the number of frames sent. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
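A minimal, self-contained sketch of that decision logic follows; the types and names here are simplified stand-ins, not the actual kernel code.

```c
/* Sketch only -- simplified stand-in types, not the kernel's. */
enum ack_kind { ACK_RR, ACK_RNR, ACK_PIGGYBACKED };

struct ertm_state {
	int local_busy;          /* receive side is busy -> would send RNR      */
	int pending_iframes;     /* I-frames queued and allowed by the txWindow */
};

/* Decide how the received frames get acknowledged. */
static enum ack_kind decide_ack(const struct ertm_state *s)
{
	if (s->local_busy)
		return ACK_RNR;           /* tell the peer to hold off          */
	if (s->pending_iframes > 0)
		return ACK_PIGGYBACKED;   /* ReqSeq rides on the outgoing data  */
	return ACK_RR;                    /* nothing to send: explicit RR ack   */
}
```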
-
Committed by Gustavo F. Padovan
After reassembling the SDU we need to check its size; it must not exceed the MTU. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
All packets smaller than the specified minimum are dropped. Note that the sizes of the L2CAP basic header, FCS and SAR fields have already been subtracted from len at the moment of the size check. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
On receipt of an F=1 frame in the WAIT_F state, ERTM shall stop the monitor timer and start the retransmission timer (if there are unacked frames). Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
After receiving an RR with the P bit set, ERTM shall use this function to choose which type of frame to send in reply, with the F bit set to 1. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Trivial clean up. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
ERTM and Streaming Modes were having problems when the ACL MTU is lower than the MPS. The 'minus 10' is to take into account the header and FCS lengths. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
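A small sketch of the clamp this implies; the '10' comes from the commit text, while the constant and function names are illustrative assumptions.

```c
/* Clamp the MPS so an outgoing PDU still fits in the ACL MTU;
 * names here are illustrative, not the kernel's. */
#define HDR_AND_FCS_OVERHEAD 10U   /* header + FCS bytes, per the commit text */

static unsigned int clamp_mps(unsigned int mps, unsigned int acl_mtu)
{
	unsigned int max_payload = acl_mtu - HDR_AND_FCS_OVERHEAD;

	return mps > max_payload ? max_payload : mps;
}
```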
-
Committed by Gustavo F. Padovan
All operations related to the txWindow should be modulo 64. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
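For illustration, modulo-64 sequence arithmetic of this kind looks like the following sketch (names are not the kernel's).

```c
/* Modulo-64 sequence arithmetic for the ERTM tx window; illustrative names. */
#define ERTM_SEQ_MODULO 64U

static unsigned int seq_next(unsigned int seq)
{
	return (seq + 1) % ERTM_SEQ_MODULO;
}

/* Forward distance from 'a' to 'b', wrapping at 64 -- e.g. the number of
 * outstanding frames between ExpectedAckSeq and NextTxSeq. */
static unsigned int seq_sub(unsigned int b, unsigned int a)
{
	return (b - a + ERTM_SEQ_MODULO) % ERTM_SEQ_MODULO;
}
```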
-
Committed by Gustavo F. Padovan
l2cap_data_channel() does not free the S-frame, so we free it here. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Inside "case L2CAP_MODE_BASIC:" we don't need to check the sk_type and L2CAP mode, so only the length check is needed. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
Remove extra braces and labels, break lines over column 80, etc. Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
Committed by Gustavo F. Padovan
It also removes an unneeded check of the MTU; the check is already done in sco_send_frame(). Signed-off-by: Gustavo F. Padovan <padovan@profusion.mobi> Reviewed-by: João Paulo Rechi Vita <jprvita@profusion.mobi> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
- 08 May 2010, 1 commit
-
-
Committed by Neil Horman
A while back there was a discussion regarding the rt_secret_interval timer. Given that we've had the ability to do emergency route cache rebuilds for a while now, based on a statistical analysis of the various hash chain lengths in the cache, the use of the flush timer is somewhat redundant. This patch removes the rt_secret_interval sysctl, allowing us to rely solely on the statistical analysis mechanism to determine the need for route cache flushes. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 May 2010, 2 commits
-
-
Committed by Eric Dumazet
Introduce a ____napi_schedule() helper for callers in irq-disabled contexts; rps_trigger_softirq() becomes a leaf function. Use container_of() in process_backlog() instead of accessing the per_cpu address. Use a custom inlined version of __napi_complete() in process_backlog() to avoid one locked instruction: only the current cpu owns and manipulates this napi, and NAPI_STATE_SCHED is the only possible flag set on the backlog, so we can use a plain write instead of clear_bit(), and we don't need an smp_mb() memory barrier since, with RPS on, the backlog is protected by a spinlock. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
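The "plain write instead of clear_bit()" point can be illustrated with this self-contained sketch; the struct and names are stand-ins, not the kernel's.

```c
/* Stand-in type: when only the local CPU ever touches this napi and SCHED
 * is the only bit that can be set, the state can be cleared with an
 * ordinary store, avoiding a locked read-modify-write and the smp_mb(). */
struct backlog_napi {
	unsigned long state;   /* only bit 0 ("SCHED") is ever used here */
};

#define BACKLOG_SCHED 0x1UL

static void backlog_complete(struct backlog_napi *napi)
{
	/* Real code would also unlink the napi from the poll list here. */
	napi->state = 0;       /* plain write, no atomic clear_bit() needed */
}
```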
-
Committed by Bjørn Mork
Adding addresses and ports to the short packet log message, as ipv4/udp.c does, makes these messages a lot more useful: [ 822.182450] UDPv6: short packet: From [2001:db8:ffb4:3::1]:47839 23715/178 to [2001:db8:ffb4:3:5054:ff:feff:200]:1234 This requires us to drop logging in case pskb_may_pull() fails, which is also consistent with ipv4/udp.c. Signed-off-by: Bjørn Mork <bjorn@mork.no> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 06 May 2010, 3 commits
-
-
Committed by WANG Cong
Based on the previous patch, make bridge support netpoll by: 1) implementing the 2 methods needed to support netpoll for the bridge; 2) adjusting netpoll handling when forwarding packets via the bridge; 3) disabling netpoll support of the bridge when a device that does not support netpoll is added to the bridge; 4) enabling netpoll support when all underlying devices support netpoll. Cc: David Miller <davem@davemloft.net> Cc: Neil Horman <nhorman@tuxdriver.com> Cc: Stephen Hemminger <shemminger@linux-foundation.org> Cc: Matt Mackall <mpm@selenic.com> Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by WANG Cong
This whole patchset adds netpoll support to bridge and bonding devices. I have already tested it for bridge, bonding, bridge over bonding, and bonding over bridge; it looks fine now. To make bridge and bonding support netpoll, we need to adjust some generic netpoll code. This patch does the following: 1) introduces two new priv_flags for struct net_device: IFF_IN_NETPOLL, which indicates we are processing a netpoll, and IFF_DISABLE_NETPOLL, which is used to disable netpoll support for a device at run-time; 2) introduces one new method for netdev_ops: ->ndo_netpoll_cleanup(), used to clean up netpoll when a device is removed; 3) introduces netpoll_poll_dev(), which takes a struct net_device * parameter, and exports netpoll_send_skb() and netpoll_poll_dev(), which will be used later; 4) hides a pointer to struct netpoll in struct netpoll_info, ditto; 5) introduces ->real_dev for struct netpoll; 6) introduces a new status, NETDEV_BONDING_DESLAE, used to disable netconsole before releasing a slave, to avoid deadlocks. Cc: David Miller <davem@davemloft.net> Cc: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: WANG Cong <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
"mac80211: improve IBSS scanning" was missing a hunk. This adds that hunk as originally intended. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 05 May 2010, 1 commit
-
-
Committed by Eric Dumazet
With the following patch I can reach the maximum rate of my pktgen+udpsink simulator: - 'old' machine: dual quad core E5450 @3.00GHz - 64 UDP rx flows (differing only by destination port) - RPS enabled, NIC interrupts serviced on cpu0 - rps dispatched on 7 other cores (~130,000 IPI per second) - SLAB allocator (faster than SLUB in this workload) - tg3 NIC - 1,080,000 pps without a single drop at NIC level. The idea is to add two prefetchw() calls in __alloc_skb(), one to prefetch the first sk_buff cache line, the second to prefetch the shinfo part. Also use one memset() to initialize all skb_shared_info fields instead of one by one, reducing the number of instructions by using long word moves. All skb_shared_info fields before 'dataref' are cleared in __alloc_skb(). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
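The two optimizations can be illustrated with this self-contained sketch; the struct is a stand-in, not the real skb_shared_info layout.

```c
#include <stddef.h>
#include <string.h>

/* Stand-in layout: a run of fields cleared together, then 'dataref'. */
struct fake_shinfo {
	unsigned short nr_frags;
	unsigned short gso_size;
	unsigned short gso_segs;
	unsigned short gso_type;
	void          *frag_list;
	int            dataref;   /* first field NOT covered by the memset */
};

static void init_shinfo(struct fake_shinfo *shinfo)
{
	__builtin_prefetch(shinfo, 1);   /* 1 = prepare for write (GCC builtin) */

	/* One bulk clear of everything before 'dataref' instead of assigning
	 * each field individually. */
	memset(shinfo, 0, offsetof(struct fake_shinfo, dataref));
	shinfo->dataref = 1;
}
```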
-
- 04 May 2010, 6 commits
-
-
Committed by Eric Dumazet
Commit 4b0b72f7 (net: speedup udp receive path) introduced a bug in skb_free_datagram_locked(). We should not skb_orphan() the skb if we don't have the guarantee that we are the last skb user; this might happen with concurrent MSG_PEEK users. To keep the socket locked for the smallest period of time, we split the consume_skb() logic and inline it in skb_free_datagram_locked(). Reported-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Eric Dumazet
Add hlist_for_each_entry_rcu_bh() and hlist_for_each_entry_continue_rcu_bh() macros, and use them in ipv6_get_ifaddr(), if6_get_first() and if6_get_next() to fix lockdep warnings. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
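For orientation, a hedged approximation of what such a _bh macro could look like, modeled on the existing hlist_for_each_entry_rcu() of that era; this is an approximation, not the exact kernel definition.

```c
/* Approximate sketch: same shape as hlist_for_each_entry_rcu(), but every
 * list-pointer load goes through the _bh flavour of rcu_dereference(). */
#define hlist_for_each_entry_rcu_bh(tpos, pos, head, member)             \
	for (pos = rcu_dereference_bh((head)->first);                     \
	     pos &&                                                        \
	     ({ tpos = hlist_entry(pos, typeof(*tpos), member); 1; });     \
	     pos = rcu_dereference_bh(pos->next))
```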
-
Committed by Ilpo Järvinen
Worse yet, it seems that its arguments were in reverse order. Also remove one related helper which seems hardly worth keeping. Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
When IBSS is fixed to a frequency, it can still scan to try to find the right BSSID. This makes sense if the BSSID isn't also fixed, but it need not scan all channels -- just one is sufficient. Make it do that by moving the scan setup code to ieee80211_request_internal_scan() and including a channel variable setting. Note that this could be further improved to start the IBSS right away if both frequency and BSSID are fixed. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
This allows enabling TX and disabling both TX and RX aggregation sessions manually in debugfs. It is very useful for debugging session initiation and teardown problems since, with this, you don't have to force a lot of traffic to get aggregation and thus have less data to analyse. Also, to debug the mac80211 code itself, make hwsim "support" aggregation sessions. It will still just transfer the frame, but go through the setup and teardown handshakes. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
Committed by Johannes Berg
Both of these functions can currently return a station pointer that, to the driver, is invalid (in IBSS mode only) because adding the station failed. Check for that, and also make ieee80211_find_sta() properly use the per-interface station search. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
-
- 03 May 2010, 1 commit
-
-
Committed by Changli Gao
The per-cpu variable softnet_data.total was shared between IRQ and SoftIRQ context without any protection. Also, enqueue_to_backlog should update the netdev_rx_stat of the target CPU. This patch renames softnet_data.total to softnet_data.processed: the number of packets processed in the upper levels (IP stacks). The softnet_stat data is moved into softnet_data. Signed-off-by: Changli Gao <xiaosuo@gmail.com> ---- include/linux/netdevice.h | 17 +++++++---------- net/core/dev.c | 26 ++++++++++++-------------- net/sched/sch_generic.c | 2 +- 3 files changed, 20 insertions(+), 25 deletions(-) Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 May 2010, 2 commits
-
-
Committed by David S. Miller
In commit 6be8ac2f ("[NET]: uninline skb_pull, de-bloats a lot") we uninlined skb_pull, but in some critical paths it makes sense to inline it, and doing so helps performance significantly. Create an skb_pull_inline() so that we can do this in a way that also serves as annotation. Based upon a patch by Eric Dumazet. Signed-off-by: David S. Miller <davem@davemloft.net>
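A plausible shape for such a helper, assuming it mirrors the existing skb_pull() logic (length check, then advance the data pointer); only the function's existence is stated above, so the body here is a guess.

```c
/* Guessed body, assuming the same semantics as skb_pull(): return NULL
 * when the skb is too short, otherwise advance past 'len' bytes. */
static inline unsigned char *skb_pull_inline(struct sk_buff *skb,
					     unsigned int len)
{
	return unlikely(len > skb->len) ? NULL : __skb_pull(skb, len);
}
```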
-
Committed by Eric Dumazet
The sk_callback_lock rwlock actually protects the sk->sk_sleep pointer, so we need two atomic operations (and the associated dirtying) per incoming packet. An RCU conversion is pretty much needed: 1) Add a new structure, called "struct socket_wq", to hold all fields that will need rcu_read_lock() protection (currently: a wait_queue_head_t and a struct fasync_struct pointer). [A future patch will add a list anchor for wakeup coalescing.] 2) Attach one such structure to each "struct socket" created in sock_alloc_inode(). 3) Respect the RCU grace period when freeing a "struct socket_wq". 4) Change the sk_sleep pointer in "struct sock" to sk_wq, a pointer to "struct socket_wq". 5) Change the sk_sleep() function to use the new sk->sk_wq instead of sk->sk_sleep. 6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used inside an rcu_read_lock() section. 7) Change all sk_has_sleeper() callers to: - use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock); - use wq_has_sleeper() to eventually wake up tasks; - use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock). 8) sock_wake_async() is modified to use rcu protection as well. 9) Exceptions: macvtap, drivers/net/tun.c and af_unix use an integrated "struct socket_wq" instead of dynamically allocated ones; they don't need rcu freeing. Some cleanups or followups are probably needed (a possible sk_callback_lock conversion to a spinlock, for example...). Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
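A sketch of the structure and of a converted wakeup callback, following points 1) and 7) above; the rcu_head member and the callback body are assumptions on top of what the commit text states.

```c
/* Sketch: only the wait queue head and fasync pointer are stated in the
 * commit text; the rcu_head for deferred freeing is an assumption. */
struct socket_wq {
	wait_queue_head_t	wait;
	struct fasync_struct	*fasync_list;
	struct rcu_head		rcu;
};

/* Assumed shape of a converted wakeup callback (per point 7). */
static void data_ready_sketch(struct sock *sk)
{
	struct socket_wq *wq;

	rcu_read_lock();			/* replaces read_lock(&sk->sk_callback_lock) */
	wq = rcu_dereference(sk->sk_wq);
	if (wq_has_sleeper(wq))
		wake_up_interruptible(&wq->wait);
	rcu_read_unlock();
}
```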
-
- 01 May 2010, 5 commits
-
-
Committed by Vlad Yasevich
When we create the sctp_datamsg and fragment the user data, we know exactly whether we are sending full segments or not and how they might be bundled. At this time, we can mark messages as Nagle-capable or not. This makes the check at transmit time much simpler. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
-
Committed by Vlad Yasevich
Right now, if the highest tsn in the SACK doesn't change, we'll end up scanning the transmitted lists on the transports twice: once for locating the highest _new_ tsn, and once for actually tagging chunks as acked. This is a waste, since we can record the highest _new_ tsn at the same time as tagging chunks. Long ago this was not possible because we would try to mark chunks as missing at the same time as tagging them acked, and that approach didn't work. Now that the two steps are separate, we can re-use the old approach. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
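An illustrative single-pass version of that idea, with stand-in types; real SCTP code compares TSNs with serial-number arithmetic, not a plain '>'.

```c
/* Stand-in type for a chunk on a transport's transmitted list. */
struct tx_chunk {
	unsigned int	tsn;
	int		sacked;
	struct tx_chunk	*next;
};

/* Walk the list once: tag newly acked chunks and remember the highest
 * newly acked TSN in the same pass. */
static unsigned int tag_acked_chunks(struct tx_chunk *list,
				     int (*acked_by_sack)(unsigned int tsn),
				     unsigned int highest_new_tsn)
{
	struct tx_chunk *c;

	for (c = list; c; c = c->next) {
		if (!c->sacked && acked_by_sack(c->tsn)) {
			c->sacked = 1;
			if (c->tsn > highest_new_tsn)
				highest_new_tsn = c->tsn;
		}
	}
	return highest_new_tsn;
}
```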
-
Committed by Vlad Yasevich
According to RFC 4960, Section 7.2.4: If an endpoint is in Fast Recovery and a SACK arrives that advances the Cumulative TSN Ack Point, the miss indications are incremented for all TSNs reported missing in the SACK. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
-
Committed by Vlad Yasevich
rwnd_press tracks the pressure on the receive window. Every time the receive buffer overflows, we truncate the receive window and then grow it back. However, if we don't track the cumulative pressure, it's possible to reach a situation where the receive buffer is empty but rwnd stays truncated. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
-
Committed by Vlad Yasevich
The SCTP fast recovery algorithm really applies per association and impacts all transports. Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
-