- 21 Oct 2011, 15 commits
-
-
Submitted by Eric Dumazet
bnx2x allocates a full page per fragment. We must account for the size of the fragment in skb->truesize, not the used part of it.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Eilon Greenstein <eilong@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
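A minimal sketch of the accounting rule this describes (generic receive-path code, not the actual bnx2x diff): since a whole page backs the fragment, truesize grows by PAGE_SIZE even when only frag_len bytes are used.

    skb_fill_page_desc(skb, frag_idx, page, 0, frag_len);
    skb->len      += frag_len;
    skb->data_len += frag_len;
    skb->truesize += PAGE_SIZE;  /* charge the full allocation, not frag_len */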
-
Submitted by Ian Campbell
I've split this bit out of the skb frag destructor patch, since it helps enforce the use of the fragment API.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ian Campbell
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: James Bottomley <James.Bottomley@suse.de>
Cc: Karen Xie <kxie@chelsio.com>
Cc: linux-scsi@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ian Campbell
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Casey Leedom <leedom@chelsio.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ian Campbell
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Dimitris Michailidis <dm@chelsio.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ian Campbell
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Maciej Żenczykowski
Up until now, the IP{,V6}_TRANSPARENT socket options (which actually set the same bit in the socket struct) have required CAP_NET_ADMIN privileges to set or clear the option. With this change:
- clearing the bit requires no privileges;
- CAP_NET_ADMIN may set the bit (as before this change);
- CAP_NET_RAW may also set the bit, because raw sockets already pretty much let you emulate socket transparency.
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
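A hedged sketch of the resulting policy; transparent_allowed() is a hypothetical helper, not the actual setsockopt code:

    static int transparent_allowed(int val)
    {
        if (!val)  /* clearing the bit requires no privileges */
            return 0;
        if (capable(CAP_NET_ADMIN) || capable(CAP_NET_RAW))
            return 0;
        return -EPERM;
    }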
-
Submitted by Eric Dumazet
A preliminary patch before TCP constification.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
tcp_fin() only needs the socket pointer, so we can remove the skb and th parameters.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Ricardo
This patch enables the ethtool interface. The implementation is done using the libphy helper functions.
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by RongQing Li
Guard these three function declarations and definitions with the same CONFIG_PCI_IOV macro:
drivers/net/ethernet/intel/igb/igb_main.c:165: warning: ‘igb_vf_configure’ declared ‘static’ but never defined
drivers/net/ethernet/intel/igb/igb_main.c:166: warning: ‘igb_find_enabled_vfs’ declared ‘static’ but never defined
drivers/net/ethernet/intel/igb/igb_main.c:167: warning: ‘igb_check_vf_assignment’ declared ‘static’ but never defined
Signed-off-by: RongQing Li <roy.qing.li@gmail.com>
Acked-by: Greg Rose <gregory.v.rose@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
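A sketch of the fix being described; the exact signatures are assumptions, the point being that the declarations share the CONFIG_PCI_IOV guard with their definitions:

    #ifdef CONFIG_PCI_IOV
    /* signatures below are illustrative, not copied from igb_main.c */
    static int igb_vf_configure(struct igb_adapter *adapter, int vf);
    static int igb_find_enabled_vfs(struct igb_adapter *adapter);
    static int igb_check_vf_assignment(struct igb_adapter *adapter);
    #endif /* CONFIG_PCI_IOV */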
-
Submitted by Eric Dumazet
skb->truesize must account for the allocated memory, not the used part of it. Doing this is important to avoid unexpected OOM situations.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Jon Mason <mason@myri.com>
Acked-by: Jon Mason <mason@myri.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
igbvf allocates half a page per skb fragment. We must account for PAGE_SIZE/2 increments in skb->truesize, not the actual fragment length.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
Daniel Turull reported inaccuracies in pktgen when using low packet rates, because we call ndelay(val) with values bigger than 20000. For delays shorter than 100us, instead of calling ndelay() we can loop calling ktime_now() only.
Reported-by: Daniel Turull <daniel.turull@gmail.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
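A hedged sketch of the approach (not the pktgen code itself): for sub-100us waits, spin re-reading the clock instead of handing a large value to ndelay().

    static void spin_until(ktime_t end)
    {
        /* busy-wait, re-reading the monotonic clock on each pass */
        while (ktime_to_ns(ktime_get()) < ktime_to_ns(end))
            cpu_relax();
    }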
-
Submitted by Eric Dumazet
Since commit 356f0398 (TCP: increase default initial receive window.), we allow the sender to send 10 (TCP_DEFAULT_INIT_RCVWND) segments. Change tcp_fixup_rcvbuf() to reflect this change, even if no real change is expected, since sysctl_tcp_rmem[1] = 87380 and this value is bigger than the rcvmem computed by tcp_fixup_rcvbuf() (~23720).
Note: since commit 356f0398 limited the default window to the maximum of 10*1460 and 2*MSS, we use the same heuristic in this patch.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
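A small sketch of the heuristic named above: the default initial receive window covers the larger of 2*MSS and 10 full-sized (1460-byte) segments.

    static unsigned int default_init_rcvwnd(unsigned int mss)
    {
        return max(2U * mss, 10U * 1460U);  /* TCP_DEFAULT_INIT_RCVWND = 10 */
    }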
-
- 20 Oct 2011, 25 commits
-
-
Submitted by Ian Campbell
A few network drivers currently use skb_frag_struct for this purpose, but I have patches which add additional fields and semantics there which these other uses do not want. A structure for referencing sub-page regions seems like a generally useful thing, so add one instead of adding a network-subsystem-specific structure.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jens Axboe <jaxboe@fusionio.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
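An illustrative sketch of the kind of generic sub-page reference being proposed; the struct and field names here are assumptions, not the final API:

    struct page_region {
        struct page  *page;    /* backing page */
        unsigned int offset;   /* start of the region within the page */
        unsigned int len;      /* length of the region */
    };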
-
Submitted by Eric Dumazet
skb->truesize must account for the allocated memory, not the used part of it. Doing this is important to avoid unexpected OOM situations.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Yevgeny Petrilin <yevgenyp@mellanox.co.il>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Krishna Kumar
Remove the manual initialization in set_skb_frag and instead use __skb_fill_page_desc() to do the same. Patch tested on net-next.
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
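A sketch of the cleanup described, with illustrative variable names: let the helper fill the fragment descriptor instead of open-coding the page/offset/size assignments.

    __skb_fill_page_desc(skb, i, page, offset, size);
    skb_shinfo(skb)->nr_frags++;  /* __skb_fill_page_desc() does not bump this */
    skb->data_len += size;
    skb->len      += size;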
-
Submitted by Ian Campbell
I audited all of the callers in the tree and only one of them (pktgen) expects it to do so. Taking this reference is pretty obviously confusing and error prone. In particular, I looked at the following commits which switched callers of (__)skb_frag_set_page to the skb paged fragment API:
6a930b9f cxgb3: convert to SKB paged frag API.
5dc3e196 myri10ge: convert to SKB paged frag API.
0e0634d2 vmxnet3: convert to SKB paged frag API.
86ee8130 virtionet: convert to SKB paged frag API.
4a22c4c9 sfc: convert to SKB paged frag API.
18324d69 cassini: convert to SKB paged frag API.
b061b39e benet: convert to SKB paged frag API.
b7b6a688 bnx2: convert to SKB paged frag API.
804cf14e net: xfrm: convert to SKB frag APIs
ea2ab693 net: convert core to skb paged frag APIs
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by roy.qing.li@gmail.com
When we use dst_get_neighbour to get a neighbour, we need rcu_read_lock for protection, since dst_get_neighbour uses rcu_dereference. The bug was reported by Ari Savolainen <ari.m.savolainen@gmail.com>:
[ 105.612095]
[ 105.612096] ===================================================
[ 105.612100] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 105.612101] ---------------------------------------------------
[ 105.612103] include/net/dst.h:91 invoked rcu_dereference_check() without protection!
[ 105.612105]
[ 105.612106] other info that might help us debug this:
[ 105.612106]
[ 105.612108]
[ 105.612108] rcu_scheduler_active = 1, debug_locks = 0
[ 105.612110] 1 lock held by dnsmasq/2618:
[ 105.612111] #0: (rtnl_mutex){+.+.+.}, at: [<ffffffff815df8c7>] rtnl_lock+0x17/0x20
[ 105.612120]
[ 105.612121] stack backtrace:
[ 105.612123] Pid: 2618, comm: dnsmasq Not tainted 3.1.0-rc1 #41
[ 105.612125] Call Trace:
[ 105.612129] [<ffffffff810ccdcb>] lockdep_rcu_dereference+0xbb/0xc0
[ 105.612132] [<ffffffff815dc5a9>] neigh_update+0x4f9/0x5f0
[ 105.612135] [<ffffffff815da001>] ? neigh_lookup+0xe1/0x220
[ 105.612139] [<ffffffff81639298>] arp_req_set+0xb8/0x230
[ 105.612142] [<ffffffff8163a59f>] arp_ioctl+0x1bf/0x310
[ 105.612146] [<ffffffff810baa40>] ? lock_hrtimer_base.isra.26+0x30/0x60
[ 105.612150] [<ffffffff8163fb75>] inet_ioctl+0x85/0x90
[ 105.612154] [<ffffffff815b5520>] sock_do_ioctl+0x30/0x70
[ 105.612157] [<ffffffff815b55d3>] sock_ioctl+0x73/0x280
[ 105.612162] [<ffffffff811b7698>] do_vfs_ioctl+0x98/0x570
[ 105.612165] [<ffffffff811a5c40>] ? fget_light+0x340/0x3a0
[ 105.612168] [<ffffffff811b7bbf>] sys_ioctl+0x4f/0x80
[ 105.612172] [<ffffffff816fdcab>] system_call_fastpath+0x16/0x1b
Reported-by: Ari Savolainen <ari.m.savolainen@gmail.com>
Signed-off-by: RongQing <roy.qing.li@gmail.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
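A sketch of the fix pattern described: take the RCU read-side lock around the dst_get_neighbour() call, since it uses rcu_dereference() internally.

    struct neighbour *neigh;

    rcu_read_lock();
    neigh = dst_get_neighbour(dst);
    if (neigh) {
        /* use neigh only while the read-side lock is held */
    }
    rcu_read_unlock();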
-
Submitted by Dan Carpenter
This is just a cleanup. My testing version of Smatch warns about this:
net/core/filter.c +380 check_load_and_stores(6) warn: check 'flen' for negative values
flen comes from the user. We try to clamp the values here between 1 and BPF_MAXINSNS, but the clamp doesn't work because flen could be negative. This is a bug, but it's not exploitable.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
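A sketch of the intent behind the cleanup (uarg_len is a hypothetical user-supplied value): with the length held in an unsigned variable, the single upper-bound test also rejects what would otherwise be a negative value.

    unsigned int flen = uarg_len;  /* previously handled as a signed int */

    if (flen == 0 || flen > BPF_MAXINSNS)
        return -EINVAL;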
-
Submitted by Grant Grundler
"ethtool -e ethX" dumps EEPROM data. This patch sets the EEPROM length for the device. Ethtool works a lot better when the kernel believes the length is > 0.
From: Allan Chou <allan@asix.com.tw>
Signed-off-by: Grant Grundler <grundler@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Kevin Wilson
This cleanup patch removes an unnecessary include from net/ipv6/ip6_fib.c.
Signed-off-by: Kevin Wilson <wkevils@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Gerrit Renker
ipv4: compat_ioctl is local to af_inet.c, so make it static.
Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Giuseppe CAVALLARO
The problem with a big MTU around 4096 bytes is that you end up allocating (4096 + NET_SKB_PAD + NET_IP_ALIGN + sizeof(struct skb_shared_info)) bytes -> 8192 bytes: order-1 pages. It's better to limit the MTU to SKB_MAX_HEAD(NET_SKB_PAD), so there is no more than one page per skb. The patch also changes the netdev_alloc_skb_ip_align() call done in init_dma_desc_rings() and uses a variant allowing GFP_KERNEL allocations, letting the driver load even under memory pressure.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
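A worked sketch of the numbers and the cap described above (variable names are assumptions):

    /* with a 4096-byte MTU the head allocation spills past one page ... */
    unsigned int bufsz = 4096 + NET_SKB_PAD + NET_IP_ALIGN
                         + sizeof(struct skb_shared_info);
    /* ... so cap the MTU to what still fits in a single page */
    unsigned int max_mtu = SKB_MAX_HEAD(NET_SKB_PAD);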
-
Submitted by Giuseppe CAVALLARO
This patch enhances the STMMAC driver to support the CHAINED descriptor mode. STMMAC supports DMA descriptors that operate both in dual-buffer (RING) mode and in linked-list (CHAINED) mode. In RING mode (the default), each descriptor points to two data buffer pointers, whereas in CHAINED mode it points to only one data buffer pointer. In CHAINED mode each descriptor holds a pointer to the next descriptor in the list, creating the explicit chaining in the descriptor itself; such explicit chaining is not possible in RING mode.
The first version of this work was done by Rayagond. The patch has since been reworked to avoid ifdefs inside the C code. A new header file has been added to define all the functions needed for managing enhanced and normal descriptors; these have to be specialized according to ring/chain usage. Two new C files have also been added to implement the helper routines needed to manage jumbo frames and chain and ring setup (i.e. desc3).
Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
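A purely illustrative sketch of the two descriptor layouts being contrasted; this is not the real stmmac hardware descriptor format, only the conceptual difference.

    struct ring_desc {      /* RING mode */
        u32 des0, des1;     /* status / control words */
        u32 buf1;           /* data buffer pointer #1 */
        u32 buf2;           /* data buffer pointer #2 */
    };

    struct chain_desc {     /* CHAINED mode */
        u32 des0, des1;
        u32 buf1;           /* single data buffer pointer */
        u32 next;           /* address of the next descriptor in the list */
    };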
-
Submitted by Giuseppe CAVALLARO
Enable the MMC support if it is actually available from the HW capability register.
Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Rayagond Kokatanur
Signed-off-by: Rayagond Kokatanur <rayagond@vayavyalabs.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Giuseppe CAVALLARO
This patch allows setting an MTU bigger than 1500 when normal descriptors are used. This is helping some SPEAr customers.
Signed-off-by: Deepak SIKRI <deepak.sikri@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Giuseppe CAVALLARO
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Giuseppe CAVALLARO
This patch fixes a problem raised on the Orly ARM SMP platform where, in case of fragmented frames, the descriptors in the TX ring ended up corrupted. This was due to missing lock protection in the TX process.
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
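A sketch of the fix pattern named above; the lock field name is an assumption, not necessarily the stmmac one.

    spin_lock(&priv->tx_lock);
    /* ... fill the TX ring descriptors for this possibly-fragmented skb ... */
    spin_unlock(&priv->tx_lock);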
-
Submitted by Srinivas Kandagatla
This patch stops advertising 1000Base capabilities if the GMAC is configured for MII or RMII mode and a GPHY is plugged in on the board. Without this patch, if a GBit switch is connected on the MII interface, Ethernet stops working entirely. Discovered as part of the triage of https://bugzilla.stlinux.com/show_bug.cgi?id=14148
Signed-off-by: Srinivas Kandagatla <srinivas.kandagatla@st.com>
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
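A sketch of the pattern described, assuming a libphy phydev is attached: mask out the gigabit modes when the MAC is wired for MII/RMII.

    phydev->supported &= ~(SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full);
    phydev->advertising = phydev->supported;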
-
Submitted by Eric W. Biederman
sysfs is a core piece of infrastructure that many people use, and few people have all of the rules for using it correctly in their head. Add warnings for people using tagged directories improperly so that any misuses can be caught and diagnosed quickly. A single inexpensive test in sysfs_find_dirent is almost sufficient to catch all possible misuses. An additional warning is needed in sysfs_add_dirent so that we actually fail when attempting to add an untagged dirent in a tagged directory.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric W. Biederman
Now that /sys/class/net/bonding_masters is implemented as a tagged sysfs file, we can remove support for untagged files in tagged directories. This change removes any ambiguity about what a NULL namespace value means: after this patch, a NULL namespace parameter means we are talking about an untagged sysfs dirent. This makes the sysfs code much less prone to mistakes during maintenance.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric W. Biederman
This fixes a network namespace misfeature: bonding_masters looked at current instead of remembering the context in which /sys/class/net/bonding_masters was opened to decide which network namespace to act upon. It also removes the need for sysfs to handle tagged directories with untagged members, allowing for a conceptually simpler sysfs implementation.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric W. Biederman
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric W. Biederman
Looking up files in sysfs is hard to understand and analyze because we currently allow placing untagged files in tagged directories. In the implementation of that, we have two subtly different meanings of NULL: NULL meaning there is no tag on a directory entry, and NULL meaning we don't care which namespace the lookup is performed for. These multiple uses of NULL have resulted in subtle bugs (since fixed) in the code. Currently it is only the bonding driver that needs to have an untagged file in a tagged directory.
To untangle this mess I am adding support for tagged files to sysfs, modifying the bonding driver to implement bonding_masters as a tagged file, registering bonding_masters once for each network namespace, and then removing support for untagged entries in tagged sysfs directories. This results in code that is much easier to reason about.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Flavio Leitner
The port shouldn't be enabled unless its current MUX state is DISTRIBUTING, which is correctly handled by ad_mux_machine(); otherwise the packet sent can be lost because the other end may not be ready. The issue happens on every port initialization, but as the ports are expected to move quickly to DISTRIBUTING, it doesn't cause much of a problem. However, it does cause constant packet loss if the other peer has the port configured to stay in STANDBY (i.e. SYNC set to OFF).
Signed-off-by: Flavio Leitner <fbl@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
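A purely illustrative sketch of the rule described; the helper and state names are assumptions, not the bond_3ad code.

    /* only bring the port up once the mux machine reports DISTRIBUTING */
    if (port_mux_state(port) == MUX_DISTRIBUTING)
        port_enable_tx(port);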
-
Submitted by Richard Cochran
This patch adds a sanity check on the values provided by user space for the hardware time stamping configuration. If the values lie outside of the absolute limits, then the ioctl request will be denied.
Signed-off-by: Richard Cochran <richard.cochran@omicron.at>
Signed-off-by: David S. Miller <davem@davemloft.net>
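A hedged sketch of the kind of range check described; the real patch validates every enum value defined in linux/net_tstamp.h, this only shows the shape of the check.

    struct hwtstamp_config cfg;

    if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
        return -EFAULT;

    if (cfg.flags)                 /* no flags are defined yet */
        return -EINVAL;

    switch (cfg.tx_type) {
    case HWTSTAMP_TX_OFF:
    case HWTSTAMP_TX_ON:
        break;
    default:
        return -ERANGE;            /* outside the absolute limits */
    }

    switch (cfg.rx_filter) {
    case HWTSTAMP_FILTER_NONE:
    case HWTSTAMP_FILTER_ALL:
        break;                     /* remaining PTP filter values elided */
    default:
        return -ERANGE;
    }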
-
Submitted by Eric W. Biederman
This patch moves the rcu_barrier from rollback_registered_many (inside the rtnl_lock) into netdev_run_todo (just outside the rtnl_lock). This allows us to gain the full benefit of synchronize_net calling synchronize_rcu_expedited when the rtnl_lock is held. The rcu_barrier in rollback_registered_many was originally a synchronize_net but was promoted to an rcu_barrier() when it was found that people were unnecessarily hitting the 250ms wait in netdev_wait_allrefs(). Changing the rcu_barrier back to a synchronize_net is therefore safe. Since we only care about waiting for the rcu callbacks before we get to netdev_wait_allrefs(), it is also safe to move the wait into netdev_run_todo.
This was tested by creating and destroying 1000 tap devices and observing /proc/lock_stat. /proc/lock_stat reports this change reduces the hold times of the rtnl_lock by a factor of 10. There was no observable difference in the amount of time it takes to destroy a network device.
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-