- 23 Jan 2015, 1 commit
-
-
Submitted by Florian Westphal
Don't update the Tx tail descriptor if the queue hasn't been stopped and we know at least one more skb will be sent right away. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
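A minimal sketch of the idea (illustrative only; the helper, ring and register names below are not the driver's own): the tail write is skipped while the stack signals that another frame follows immediately and the queue is still running.

    #include <linux/netdevice.h>
    #include <linux/io.h>

    /* Hypothetical helper: batch the doorbell while more frames are coming. */
    static void tx_maybe_kick(struct net_device *netdev, void __iomem *tail_reg,
                              u16 next_to_use, bool xmit_more)
    {
        if (xmit_more && !netif_queue_stopped(netdev))
            return;    /* another skb follows right away */

        wmb();         /* descriptors must be visible before the tail write */
        writel(next_to_use, tail_reg);
    }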
-
- 14 Jan 2015, 1 commit
-
-
Submitted by Jiri Pirko
The same macros are used for Rx as well, so rename them. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 Dec 2014, 1 commit
-
-
Submitted by Alexander Duyck
This change replaces calls to netdev_alloc_skb_ip_align with napi_alloc_skb. The advantage of napi_alloc_skb is currently that the page allocation doesn't make use of any IRQ-disable calls. There are a few spots where I couldn't replace the calls, as the buffer allocation routine is called as part of init, which is outside of softirq context. Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
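A hedged sketch of the substitution (the NAPI context and length parameters are stand-ins, not the driver's own names):

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* napi_alloc_skb() reserves NET_IP_ALIGN like netdev_alloc_skb_ip_align(),
     * but allocates from the per-CPU NAPI cache without disabling IRQs. */
    static struct sk_buff *rx_alloc_copybreak_skb(struct napi_struct *napi,
                                                  unsigned int length)
    {
        return napi_alloc_skb(napi, length);
    }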
-
- 09 Dec 2014, 1 commit
-
-
Submitted by Alexander Duyck
Update the Intel Ethernet drivers to use eth_skb_pad() and skb_put_padto() instead of doing their own implementations of the function. This also cleans up two other spots where skb_pad was called but the length and tail pointers were being manipulated directly instead of just having the padding length added via __skb_put. Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
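For context, a minimal sketch of the helper-based padding this switches to (simplified; a real Tx path would drop the frame on failure):

    #include <linux/etherdevice.h>
    #include <linux/skbuff.h>

    /* eth_skb_pad() pads the frame to ETH_ZLEN, zeroes the padding and frees
     * the skb on allocation failure, replacing the open-coded skb_pad() plus
     * manual length/tail adjustment. */
    static int tx_pad_runt(struct sk_buff *skb)
    {
        return eth_skb_pad(skb);    /* 0 on success, -ENOMEM otherwise */
    }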
-
- 30 Oct 2014, 1 commit
-
-
Submitted by Francesco Ruggeri
VMware's e1000 implementation does not seem to support unicast filtering. This can be observed by configuring a macvlan interface on eth0 in a VM in VMware Fusion 5.0.5 and trying to use that interface instead of eth0. Tested on 3.16. Signed-off-by: Francesco Ruggeri <fruggeri@arista.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 12 Sep 2014, 7 commits
-
-
Submitted by Florian Westphal
napi_gro_frags allows skb reuse in case GRO can merge payload pages into an skb on the GRO lists. netperf TCP_STREAM, kvm-e1000 emulation, MTU 9k:

    Size   Size   Size    Time   Throughput
    bytes  bytes  bytes   secs.  10^6bits/sec
    old:   87380  16384   16384  30.00   8985.78
    new:   87380  16384   16384  30.00   9907.05

Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
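A rough sketch of the receive shape this enables (illustrative, not the driver's exact code): the page holding the frame is attached to the skb that GRO keeps cached for this NAPI instance, then handed back so it can be reused when GRO merges the payload.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Illustrative Rx completion for one frame stored in 'page'. */
    static void rx_one_frame(struct napi_struct *napi, struct page *page,
                             unsigned int len)
    {
        struct sk_buff *skb = napi_get_frags(napi);

        if (unlikely(!skb))
            return;    /* drop; a real driver would recycle the page */

        skb_fill_page_desc(skb, skb_shinfo(skb)->nr_frags, page, 0, len);
        skb->len += len;
        skb->data_len += len;
        skb->truesize += PAGE_SIZE;

        napi_gro_frags(napi);    /* the skb may be reused internally by GRO */
    }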
-
Submitted by Florian Westphal
Instead of preallocating Rx skbs, allocate them right before sending the inbound packet up the stack. e1000-kvm, MTU 1500, netperf TCP_STREAM:

    Size   Size   Size    Time   Throughput
    bytes  bytes  bytes   secs.  10^6bits/sec
    old:   87380  16384   16384  60.00   4532.40
    new:   87380  16384   16384  60.00   4599.05

Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal
... and remove *page; it's only used for Rx. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal
e1000 uses the same metadata struct for Rx and Tx, but Tx and Rx have different requirements. For Rx we only need to store a buffer and a DMA address. A follow-up patch will remove the skb for Rx, bringing rx_buffer_info down to 16 bytes on x86_64. [ buffer_info is 48 bytes ] Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
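A hedged sketch of what the split Rx metadata could look like once the skb is removed (field names are illustrative; the real definitions live in the driver headers):

    #include <linux/types.h>

    /* Rx only needs the backing buffer (a page for jumbo frames or a raw
     * data buffer otherwise) plus its DMA address; the Tx-only bookkeeping
     * (skb, timestamps, segment counts) stays in the Tx buffer struct. */
    struct example_rx_buffer {
        union {
            struct page *page;    /* jumbo receive */
            u8 *data;             /* normal receive */
        } rxbuf;
        dma_addr_t dma;           /* 8 + 8 = 16 bytes on x86_64 */
    };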
-
Submitted by Florian Westphal
Currently we unmap the DMA range and then copy to a new skb. Change this so we can keep the mapping in case the data is copied. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
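A minimal copy-break sketch of the idea (an assumed shape, not the driver's code): sync the buffer for CPU access, copy the small frame out, and leave the mapping intact so the buffer can be reposted to the ring.

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>
    #include <linux/string.h>

    static void rx_copybreak(struct device *dev, dma_addr_t dma,
                             const u8 *data, unsigned int len,
                             struct sk_buff *new_skb)
    {
        dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
        memcpy(skb_put(new_skb, len), data, len);
        /* No dma_unmap_single() here: the original buffer stays mapped and
         * is handed straight back to the hardware. */
        dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);
    }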
-
Submitted by Florian Westphal
It's the same in both handlers. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Florian Westphal
... and make it static. Signed-off-by: Florian Westphal <fw@strlen.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 26 Aug 2014, 1 commit
-
-
Submitted by Vlad Yasevich
This device claims TSO and checksum support for VLANs. It also allows a user to control VLAN acceleration offloading. As such, it is possible to turn off VLAN acceleration and configure a VLAN that will continue to support TSO. In that situation the packet passed down to the device will contain a VLAN header, and skb->protocol will be set to ETH_P_8021Q. The device assumes that skb->protocol contains the network protocol value and uses that value to set up TSO and checksum information, which results in corrupted frames being sent on the wire. This patch extracts the protocol value correctly and fixes TSO for non-accelerated traffic. CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com> CC: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Bruce Allan <bruce.w.allan@intel.com> CC: Carolyn Wyborny <carolyn.wyborny@intel.com> CC: Don Skidmore <donald.c.skidmore@intel.com> CC: Greg Rose <gregory.v.rose@intel.com> CC: Alex Duyck <alexander.h.duyck@intel.com> CC: John Ronciak <john.ronciak@intel.com> CC: Mitch Williams <mitch.a.williams@intel.com> CC: Linux NICS <linux.nics@intel.com> CC: e1000-devel@lists.sourceforge.net Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
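A hedged sketch of the kind of protocol extraction involved (assuming vlan_get_protocol() is the helper; the actual TSO/checksum setup is omitted):

    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    /* When VLAN acceleration is off, skb->protocol is ETH_P_8021Q and the
     * payload protocol must be read from the in-line VLAN header instead. */
    static __be16 tx_network_protocol(struct sk_buff *skb)
    {
        return vlan_get_protocol(skb);
    }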
-
- 13 Aug 2014, 1 commit
-
-
Submitted by Benoit Taine
We should prefer `struct pci_device_id` over `DEFINE_PCI_DEVICE_TABLE` to meet kernel coding-style guidelines. This issue was reported by checkpatch. A simplified version of the semantic patch that makes this change is as follows (http://coccinelle.lip6.fr/):

    // <smpl>
    @@
    identifier i;
    declarer name DEFINE_PCI_DEVICE_TABLE;
    initializer z;
    @@
    - DEFINE_PCI_DEVICE_TABLE(i)
    + const struct pci_device_id i[] = z;
    // </smpl>

[bhelgaas: add semantic patch] Signed-off-by: Benoit Taine <benoit.taine@lip6.fr> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
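For illustration, a device table in the preferred form (the vendor/device ID below is a placeholder, not the driver's full list):

    #include <linux/module.h>
    #include <linux/pci.h>

    static const struct pci_device_id example_pci_tbl[] = {
        { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x100E) },    /* placeholder ID */
        { }    /* required terminating entry */
    };
    MODULE_DEVICE_TABLE(pci, example_pci_tbl);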
-
- 04 Jun 2014, 1 commit
-
-
Submitted by Yongjian Xu
There is no case in which skb->len would be 0 or 'negative'. Remove the check. Signed-off-by: Yongjian Xu <xuyongjiande@gmail.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 11 Apr 2014, 1 commit
-
-
Submitted by Francois Romieu
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 30 Nov 2013, 3 commits
-
-
Submitted by Vladimir Davydov
On e1000_down(), we should ensure every asynchronous work is canceled before proceeding. Since the watchdog_task can schedule other works apart from itself, it should be stopped first, but currently it is stopped after the reset_task. This can result in the following race, leading to the reset_task running after module unload:

    e1000_down_and_stop():                     e1000_watchdog():
    ----------------------                     -----------------
    cancel_work_sync(reset_task)
                                               schedule_work(reset_task)
    cancel_delayed_work_sync(watchdog_task)

The patch moves cancel_delayed_work_sync(watchdog_task) to the beginning of e1000_down_and_stop(), ensuring the race is impossible. Cc: Tushar Dave <tushar.n.dave@intel.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Vladimir Davydov <vdavydov@parallels.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
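A minimal sketch of the resulting shutdown order (simplified; the adapter struct below is a stand-in for the driver's own, and the real function also manipulates state bits not shown here):

    #include <linux/workqueue.h>

    struct example_adapter {
        struct delayed_work watchdog_task;
        struct delayed_work phy_info_task;
        struct delayed_work fifo_stall_task;
        struct work_struct reset_task;
    };

    /* Cancel the watchdog first: it is the work that can schedule the others. */
    static void example_down_and_stop(struct example_adapter *adapter)
    {
        cancel_delayed_work_sync(&adapter->watchdog_task);
        cancel_work_sync(&adapter->reset_task);
        cancel_delayed_work_sync(&adapter->phy_info_task);
        cancel_delayed_work_sync(&adapter->fifo_stall_task);
    }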
-
Submitted by Vladimir Davydov
The patch fixes the following lockdep warning, which is 100% reproducible on network restart: ====================================================== [ INFO: possible circular locking dependency detected ] 3.12.0+ #47 Tainted: GF ------------------------------------------------------- kworker/1:1/27 is trying to acquire lock: ((&(&adapter->watchdog_task)->work)){+.+...}, at: [<ffffffff8108a5b0>] flush_work+0x0/0x70 but task is already holding lock: (&adapter->mutex){+.+...}, at: [<ffffffffa0177c0a>] e1000_reset_task+0x4a/0xa0 [e1000] which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #1 (&adapter->mutex){+.+...}: [<ffffffff810bdb5d>] lock_acquire+0x9d/0x120 [<ffffffff816b8cbc>] mutex_lock_nested+0x4c/0x390 [<ffffffffa017233d>] e1000_watchdog+0x7d/0x5b0 [e1000] [<ffffffff8108b972>] process_one_work+0x1d2/0x510 [<ffffffff8108ca80>] worker_thread+0x120/0x3a0 [<ffffffff81092c1e>] kthread+0xee/0x110 [<ffffffff816c3d7c>] ret_from_fork+0x7c/0xb0 -> #0 ((&(&adapter->watchdog_task)->work)){+.+...}: [<ffffffff810bd9c0>] __lock_acquire+0x1710/0x1810 [<ffffffff810bdb5d>] lock_acquire+0x9d/0x120 [<ffffffff8108a5eb>] flush_work+0x3b/0x70 [<ffffffff8108b5d8>] __cancel_work_timer+0x98/0x140 [<ffffffff8108b693>] cancel_delayed_work_sync+0x13/0x20 [<ffffffffa0170cec>] e1000_down_and_stop+0x3c/0x60 [e1000] [<ffffffffa01775b1>] e1000_down+0x131/0x220 [e1000] [<ffffffffa0177c12>] e1000_reset_task+0x52/0xa0 [e1000] [<ffffffff8108b972>] process_one_work+0x1d2/0x510 [<ffffffff8108ca80>] worker_thread+0x120/0x3a0 [<ffffffff81092c1e>] kthread+0xee/0x110 [<ffffffff816c3d7c>] ret_from_fork+0x7c/0xb0 other info that might help us debug this: Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&adapter->mutex); lock((&(&adapter->watchdog_task)->work)); lock(&adapter->mutex); lock((&(&adapter->watchdog_task)->work)); *** DEADLOCK *** 3 locks held by kworker/1:1/27: #0: (events){.+.+.+}, at: [<ffffffff8108b906>] process_one_work+0x166/0x510 #1: ((&adapter->reset_task)){+.+...}, at: [<ffffffff8108b906>] process_one_work+0x166/0x510 #2: (&adapter->mutex){+.+...}, at: [<ffffffffa0177c0a>] e1000_reset_task+0x4a/0xa0 [e1000] stack backtrace: CPU: 1 PID: 27 Comm: kworker/1:1 Tainted: GF 3.12.0+ #47 Hardware name: System manufacturer System Product Name/P5B-VM SE, BIOS 0501 05/31/2007 Workqueue: events e1000_reset_task [e1000] ffffffff820f6000 ffff88007b9dba98 ffffffff816b54a2 0000000000000002 ffffffff820f5e50 ffff88007b9dbae8 ffffffff810ba936 ffff88007b9dbac8 ffff88007b9dbb48 ffff88007b9d8f00 ffff88007b9d8780 ffff88007b9d8f00 Call Trace: [<ffffffff816b54a2>] dump_stack+0x49/0x5f [<ffffffff810ba936>] print_circular_bug+0x216/0x310 [<ffffffff810bd9c0>] __lock_acquire+0x1710/0x1810 [<ffffffff8108a5b0>] ? __flush_work+0x250/0x250 [<ffffffff810bdb5d>] lock_acquire+0x9d/0x120 [<ffffffff8108a5b0>] ? __flush_work+0x250/0x250 [<ffffffff8108a5eb>] flush_work+0x3b/0x70 [<ffffffff8108a5b0>] ? __flush_work+0x250/0x250 [<ffffffff8108b5d8>] __cancel_work_timer+0x98/0x140 [<ffffffff8108b693>] cancel_delayed_work_sync+0x13/0x20 [<ffffffffa0170cec>] e1000_down_and_stop+0x3c/0x60 [e1000] [<ffffffffa01775b1>] e1000_down+0x131/0x220 [e1000] [<ffffffffa0177c12>] e1000_reset_task+0x52/0xa0 [e1000] [<ffffffff8108b972>] process_one_work+0x1d2/0x510 [<ffffffff8108b906>] ? process_one_work+0x166/0x510 [<ffffffff8108ca80>] worker_thread+0x120/0x3a0 [<ffffffff8108c960>] ? manage_workers+0x2c0/0x2c0 [<ffffffff81092c1e>] kthread+0xee/0x110 [<ffffffff81092b30>] ? 
__init_kthread_worker+0x70/0x70 [<ffffffff816c3d7c>] ret_from_fork+0x7c/0xb0 [<ffffffff81092b30>] ? __init_kthread_worker+0x70/0x70 == The issue background == The problem occurs, because e1000_down(), which is called under adapter->mutex by e1000_reset_task(), tries to synchronously cancel e1000 auxiliary works (reset_task, watchdog_task, phy_info_task, fifo_stall_task), which take adapter->mutex in their handlers. So the question is what does adapter->mutex protect there? The adapter->mutex was introduced by commit 0ef4ee ("e1000: convert to private mutex from rtnl") as a replacement for rtnl_lock() taken in the asynchronous handlers. It targeted on fixing a similar lockdep warning issued when e1000_down() was called under rtnl_lock(), and it fixed it, but unfortunately it introduced the lockdep warning described above. Anyway, that said the source of this bug is that the asynchronous works were made to take rtnl_lock() some time ago, so let's look deeper and find why it was added there. The rtnl_lock() was added to asynchronous handlers by commit 338c15 ("e1000: fix occasional panic on unload") in order to prevent asynchronous handlers from execution after the module is unloaded (e1000_down() is called) as it follows from the comment to the commit: > Net drivers in general have an issue where timers fired > by mod_timer or work threads with schedule_work are running > outside of the rtnl_lock. > > With no other lock protection these routines are vulnerable > to races with driver unload or reset paths. > > The longer term solution to this might be a redesign with > safer locks being taken in the driver to guarantee no > reentrance, but for now a safe and effective fix is > to take the rtnl_lock in these routines. I'm not sure if this locking scheme fixed the problem or just made it unlikely, although I incline to the latter. Anyway, this was long time ago when e1000 auxiliary works were implemented as timers scheduling real work handlers in their routines. The e1000_down() function only canceled the timers, but left the real handlers running if they were running, which could result in work execution after module unload. Today, the e1000 driver uses sane delayed works instead of the pair timer+work to implement its delayed asynchronous handlers, and the e1000_down() synchronously cancels all the works so that the problem that commit 338c15 tried to cope with disappeared, and we don't need any locks in the handlers any more. Moreover, any locking there can potentially result in a deadlock. So, this patch reverts commits 0ef4ee and 338c15. Fixes: 0ef4eedc ("e1000: convert to private mutex from rtnl") Fixes: 338c15e4 ("e1000: fix occasional panic on unload") Cc: Tushar Dave <tushar.n.dave@intel.com> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: NVladimir Davydov <vdavydov@parallels.com> Tested-by: NAaron Brown <aaron.f.brown@intel.com> Signed-off-by: NJeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by yzhu1
This change is based on a similar change made to e1000e support in commit bb9e44d0 ("e1000e: prevent oops when adapter is being closed and reset simultaneously"). The same issue has also been observed on the older e1000 cards. Here we have increased the RESET_COUNT value to 50, because during stress tests there are too many accesses to the e1000 NIC and setting RESET_COUNT to 25 is not enough. Experimentation has shown that 50 is sufficient. Signed-off-by: yzhu1 <yanjun.zhu@windriver.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
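A hedged sketch of the guard this count feeds (the bit handling and sleep interval are assumptions modeled on the e1000e change referenced above, not taken from this commit):

    #include <linux/bitops.h>
    #include <linux/delay.h>

    #define EXAMPLE_CHECK_RESET_COUNT 50    /* value per the commit message */

    /* Wait for a concurrent reset to finish before tearing the device down. */
    static void wait_for_reset(unsigned long *flags, unsigned int resetting_bit)
    {
        int count = EXAMPLE_CHECK_RESET_COUNT;

        while (test_bit(resetting_bit, flags) && count--)
            usleep_range(10000, 20000);
    }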
-
- 01 Nov 2013, 1 commit
-
-
Submitted by Hong Zhiguo
tx_ring and adapter->tx_ring are already of type "struct e1000_tx_ring *". Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 22 Sep 2013, 1 commit
-
-
Submitted by Russell King
Replace the following sequence: dma_set_mask(dev, mask); dma_set_coherent_mask(dev, mask); with a call to the new helper dma_set_mask_and_coherent(). Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
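A minimal sketch of the resulting probe-time pattern (64-bit first with a 32-bit fallback; which masks e1000 actually requests is not stated in this message):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static int example_set_dma_masks(struct pci_dev *pdev)
    {
        /* One call now covers both the streaming and coherent masks. */
        int err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));

        if (err)
            err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
        return err;
    }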
-
- 20 Apr 2013, 3 commits
-
-
Submitted by Patrick McHardy
Add a protocol argument to the VLAN packet tagging functions. In case of HW tagging we need that protocol available in the ndo_start_xmit functions, so it is stored in a new field in the skb. The new field fits into a hole (on 64 bit) and doesn't increase the skb's size. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
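For illustration, the tagging call with the added protocol argument (the wrapper name below is hypothetical; e1000 itself only handles 802.1Q tags):

    #include <linux/if_vlan.h>
    #include <linux/skbuff.h>

    /* Stash the VLAN tag for hardware insertion, now carrying the TPID too. */
    static void example_hw_tag(struct sk_buff *skb, u16 vlan_tci)
    {
        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tci);
    }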
-
Submitted by Patrick McHardy
Change the rx_{add,kill}_vid callbacks to take a protocol argument in preparation for 802.1ad support. The protocol argument used so far is always htons(ETH_P_8021Q). Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Patrick McHardy
Rename the hardware VLAN acceleration features to include "CTAG" to indicate that they only support CTAGs. Follow-up patches will introduce 802.1ad service provider tagging (STAGs) and require the distinction for hardware not capable of accelerating both. Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Mar 2013, 1 commit
-
-
Submitted by Joe Perches
I believe these error messages are already logged on allocation failure by warn_alloc_failed and so get a dump_stack on OOM. Remove the unnecessary additional error logging. Around these deletions:

    o Alignment neatening.
    o Remove unnecessary casts of dma_alloc_coherent.
    o Hoist assigns from ifs.

Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Feb 2013, 1 commit
-
-
Submitted by Jeff Kirsher
Fixes whitespace issues, such as lines exceeding 80 characters, needless blank lines and the use of spaces where tabs are needed. In addition, fixes multi-line comments to align with the networking standard. Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com>
-
- 09 Feb 2013, 1 commit
-
-
Submitted by Joe Perches
alloc failures already get standardized OOM messages and a dump_stack. For the affected mallocs around these OOM messages: converted kmallocs with multiplies to kmalloc_array; converted a kmalloc/memcpy to kmemdup; removed now-unused stack variables; removed unnecessary parentheses; neatened alignment. Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Arend van Spriel <arend@broadcom.com> Acked-by: Marc Kleine-Budde <mkl@pengutronix.de> Acked-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 09 Jan 2013, 1 commit
-
-
Submitted by Jiri Pirko
perm_addr is initialized correctly in register_netdevice(), so initializing it in drivers is no longer needed. Signed-off-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 08 Dec 2012, 1 commit
-
-
Submitted by Greg Kroah-Hartman
The __dev* removal patches for the network drivers ended up messing up the function prototypes for a bunch of drivers. This patch fixes all of them back up to be properly aligned. As a bonus, this removes almost 100 lines of code, always a nice surprise. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Dec 2012, 1 commit
-
-
Submitted by Bill Pemberton
CONFIG_HOTPLUG is going away as an option. As a result, the __dev* markings will be going away. Remove use of __devinit, __devexit_p, __devinitdata, __devinitconst, and __devexit. Signed-off-by: Bill Pemberton <wfp5p@virginia.edu> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Jesse Brandeburg <jesse.brandeburg@intel.com> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: Carolyn Wyborny <carolyn.wyborny@intel.com> Cc: Don Skidmore <donald.c.skidmore@intel.com> Cc: Greg Rose <gregory.v.rose@intel.com> Cc: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Cc: Alex Duyck <alexander.h.duyck@intel.com> Cc: John Ronciak <john.ronciak@intel.com> Cc: Tushar Dave <tushar.n.dave@intel.com> Cc: e1000-devel@lists.sourceforge.net Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 19 Sep 2012, 1 commit
-
-
Submitted by Tushar Dave
On PCI/PCI-X hardware, if the packet size is less than ETH_ZLEN, packets may get corrupted during padding by the hardware. To work around this issue, pad all small packets manually. Signed-off-by: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
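A hedged sketch of such manual padding in the transmit path (an assumed shape; this is the open-coded pattern later replaced by eth_skb_pad(), per the December 2014 entry above):

    #include <linux/etherdevice.h>
    #include <linux/skbuff.h>

    /* Pad runt frames to ETH_ZLEN before handing them to the hardware. */
    static int tx_pad_small_packet(struct sk_buff *skb)
    {
        unsigned int pad;

        if (skb->len >= ETH_ZLEN)
            return 0;

        pad = ETH_ZLEN - skb->len;
        if (skb_pad(skb, pad))
            return -ENOMEM;    /* skb is freed by skb_pad() on failure */

        __skb_put(skb, pad);   /* extend len/tail over the zeroed padding */
        return 0;
    }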
-
- 16 Sep 2012, 1 commit
-
-
Signed-off-by: Otto Estuardo Solares Cabrera <solca@galileo.edu> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 08 Sep 2012, 1 commit
-
-
Submitted by Stephen Hemminger
Signed-off-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
- 21 Aug 2012, 1 commit
-
-
Submitted by Jesse Brandeburg
This is the implementation in e1000 that allows ethtool to force the MDI state, letting users work around some improperly behaving switches. Forcing in this driver is for now only allowed when auto-negotiation is enabled. To use it, you must have a version of the ethtool application that supports this functionality. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> CC: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 11 Jul 2012, 1 commit
-
-
Submitted by Ben Hutchings
Convert doxygen (or similar) formatted comments to kernel-doc or unformatted comments. Delete a few that are content-free. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Jun 2012, 1 commit
-
-
Submitted by Tushar Dave
Signed-off-by: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 02 Jun 2012, 1 commit
-
-
This is another fixup where the data is not transferred into the buffer addressed by skb->data but into a page. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 18 May 2012, 1 commit
-
-
Submitted by Tushar Dave
Killing the reset task while the adapter is resetting causes a deadlock. Only kill the reset task if the adapter is not resetting. Ref bug #43132 on bugzilla.kernel.org CC: stable@vger.kernel.org Signed-off-by: Tushar Dave <tushar.n.dave@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
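A hedged sketch of the kind of guard described (the flag and bit names are generic stand-ins, not necessarily the driver's exact identifiers):

    #include <linux/bitops.h>
    #include <linux/workqueue.h>

    /* Only cancel the reset work when no reset is in flight; cancelling it
     * from within the reset path would deadlock. */
    static void example_kill_reset_task(unsigned long *flags,
                                        unsigned int resetting_bit,
                                        struct work_struct *reset_task)
    {
        if (!test_bit(resetting_bit, flags))
            cancel_work_sync(reset_task);
    }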
-
- 17 May 2012, 2 commits
-
-
The code seems to want to look at the last byte, where the HW puts some information. Since the skb->data area is never seen by the HW, I guess it does not work as expected. We pass the page address to the HW, so I *think* that in order to get to the last byte where the information might be, one should use the page buffer and take a look. This is of course no more than compile tested. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
According to the comment, errata 23 says that the memory we allocate can't cross a 64KiB boundary. In the case of jumbo frames we allocate complete pages, which never cross a 64KiB boundary: pages are naturally aligned and PAGE_SIZE either divides 64KiB or is a multiple of it, so an allocation stops before the boundary or starts after it but never crosses it. Furthermore, the check seems bogus because it looks at skb->data, which is not seen by the HW at all, because we only pass the DMA address of the page we allocated. So I *think* the workaround is not required here. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-