- 21 April 2016, 1 commit
-
-
Submitted by Alexander Duyck
This patch enables bulk free in Tx cleanup for fm10k and cleans up the boolean logic in its polling routines, in the hopes of avoiding mix-ups similar to what occurred with i40e and i40evf.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 06 April 2016, 5 commits
-
-
Submitted by Jacob Keller
The fm10k driver used its own code for generating a default indirection table on device load, which was not the same as the default generated by ethtool when an indir_size of 0 is passed to SRXFH. Take advantage of ethtool_rxfh_indir_default() and simplify the code that writes the redirection table, reducing some code duplication.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
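For reference, ethtool_rxfh_indir_default() simply distributes Rx rings round-robin across the table; a minimal sketch of filling an indirection table with it (the function and variable names below are illustrative, not the driver's actual code):

    #include <linux/ethtool.h>

    /* Sketch: populate an RSS indirection table with the stock ethtool
     * default, i.e. queue = index % num_rx_rings.
     */
    static void example_fill_default_indir(u32 *indir, u32 indir_size,
                                           u32 num_rx_rings)
    {
            u32 i;

            for (i = 0; i < indir_size; i++)
                    indir[i] = ethtool_rxfh_indir_default(i, num_rx_rings);
    }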
-
Submitted by Jacob Keller
Fix a kernel panic that occurs during surprise removal. Clear the interface queue counts upon fm10k_init_msix_capability failure. This prevents further code (fm10k_update_stats etc.) from attempting to access unallocated queue vector or ring memory.
[ 628.692648] BUG: unable to handle kernel NULL pointer dereference at 0000000000000068
[ 628.692805] IP: [<ffffffffa0475caf>] fm10k_update_stats+0x7f/0x2c0 [fm10k]
[ 628.693173] PGD 0
[ 628.693759] Oops: 0000 [#1] SMP
[ 628.699321] CPU: 10 PID: 8164 Comm: kworker/10:0 Tainted: G OE ------------ 3.10.0-327.el7.x86_64 #1
[ 628.700096] Hardware name: Supermicro X9DAi/X9DAi, BIOS 3.2 05/09/2015
[ 628.700894] Workqueue: pciehp-1 pciehp_power_thread
[ 628.701686] task: ffff88086559c500 ti: ffff8808593c0000 task.ti: ffff8808593c0000
[ 628.702493] RIP: 0010:[<ffffffffa0475caf>] [<ffffffffa0475caf>] fm10k_update_stats+0x7f/0x2c0 [fm10k]
[ 628.703310] RSP: 0018:ffff8808593c3b00 EFLAGS: 00010282
[ 628.704132] RAX: 0000000000000000 RBX: ffff880860760000 RCX: 0000000000000000
[ 628.704963] RDX: ffff880860760b08 RSI: 0000000000000000 RDI: 0000000000000000
[ 628.705794] RBP: ffff8808593c3b40 R08: 0000000000000000 R09: 0000000000000000
[ 628.706604] R10: 0000000000000000 R11: ffff880860760c40 R12: 0000000000000080
[ 628.707420] R13: ffff8808607608c0 R14: ffff880860779ec0 R15: ffff880860779f40
[ 628.708238] FS: 0000000000000000(0000) GS:ffff88086f000000(0000) knlGS:0000000000000000
[ 628.709071] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 628.709923] CR2: 0000000000000068 CR3: 000000000194a000 CR4: 00000000001407e0
[ 628.710752] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 628.711596] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 628.712438] Stack:
[ 628.713255] ffff880860764458 ffff8808607608c0 ffff880860760000 ffff880860760000
[ 628.714088] 0000000000000080 ffff8808607608c0 ffff880860779ec0 ffff880860779f40
[ 628.714925] ffff8808593c3b88 ffffffffa04780c5 ffff880860764458 0000000a8163cb5b
[ 628.715752] Call Trace:
[ 628.716560] [<ffffffffa04780c5>] fm10k_down+0x155/0x1f0 [fm10k]
[ 628.717367] [<ffffffffa0479958>] fm10k_close+0x28/0xd0 [fm10k]
[ 628.718184] [<ffffffff81526365>] __dev_close_many+0x85/0xd0
[ 628.718986] [<ffffffff815264d8>] dev_close_many+0x98/0x120
[ 628.719764] [<ffffffff81527ab8>] rollback_registered_many+0xa8/0x230
[ 628.720527] [<ffffffff81527c80>] rollback_registered+0x40/0x70
[ 628.721294] [<ffffffff81529198>] unregister_netdevice_queue+0x48/0x80
[ 628.722052] [<ffffffff815291ec>] unregister_netdev+0x1c/0x30
[ 628.722816] [<ffffffffa04762b8>] fm10k_remove+0xd8/0xe0 [fm10k]
[ 628.723581] [<ffffffff81328c7b>] pci_device_remove+0x3b/0xb0
[ 628.724340] [<ffffffff813f5fbf>] __device_release_driver+0x7f/0xf0
[ 628.725088] [<ffffffff813f6053>] device_release_driver+0x23/0x30
[ 628.725814] [<ffffffff81321fe4>] pci_stop_bus_device+0x94/0xa0
[ 628.726535] [<ffffffff813220d2>] pci_stop_and_remove_bus_device+0x12/0x20
[ 628.727249] [<ffffffff8133de40>] pciehp_unconfigure_device+0xb0/0x1b0
[ 628.727964] [<ffffffff8133d822>] pciehp_disable_slot+0x52/0xd0
[ 628.728664] [<ffffffff8133d98a>] pciehp_power_thread+0xea/0x150
[ 628.729358] [<ffffffff8109d5fb>] process_one_work+0x17b/0x470
[ 628.730036] [<ffffffff8109e3cb>] worker_thread+0x11b/0x400
[ 628.730730] [<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
[ 628.731385] [<ffffffff810a5aef>] kthread+0xcf/0xe0
[ 628.732036] [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
[ 628.732674] [<ffffffff81645858>] ret_from_fork+0x58/0x90
[ 628.733289] [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
[ 628.733883] Code: 83 e8 01 48 8d 97 40 02 00 00 45 31 c0 4c 8d 9c c7 48 02 0
[ 628.735202] RIP [<ffffffffa0475caf>] fm10k_update_stats+0x7f/0x2c0 [fm10k]
[ 628.735732] RSP <ffff8808593c3b00>
[ 628.736285] CR2: 0000000000000068
[ 628.736846] ---[ end trace 9156088b311aff42 ]---
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
In fm10k_set_num_queues, we previously assigned the base template. This would always be overwritten by either fm10k_set_qos_queues or fm10k_set_rss_queues. In either case, we don't need the base values, so we can just remove them.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bruce Allan
Use the BIT() macro instead of open-coded bit shifts.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
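For illustration, the change reads like this (the flag name is made up):

    #include <linux/bitops.h>

    /* before: open-coded shift */
    #define EXAMPLE_FLAG_FOO        (1 << 3)

    /* after: BIT() macro, same value with clearer intent */
    #define EXAMPLE_FLAG_FOO2       BIT(3)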
-
Submitted by Bruce Allan
The semantic patch that makes this change is available in scripts/coccinelle/misc/compare_const_fl.cocci. More information about semantic patching is available at http://coccinelle.lip6.fr/
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
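That semantic patch moves constants to the right-hand side of comparisons; a minimal before/after illustration with a hypothetical constant:

    /* before: constant on the left, flagged by compare_const_fl.cocci */
    if (EXAMPLE_MAX_QUEUES == num_queues)
            return;

    /* after: constant on the right, reading more naturally */
    if (num_queues == EXAMPLE_MAX_QUEUES)
            return;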
-
- 18 March 2016, 1 commit
-
-
Submitted by Joonsoo Kim
The success of CMA allocation largely depends on the success of migration, and the key factor in that is the page reference count. Until now, page references have been manipulated by directly calling atomic functions, so we cannot track who manipulates them and where. That makes it hard to find the actual reason for a CMA allocation failure. CMA allocation should be guaranteed to succeed, so finding the offending place is really important. In this patch, call sites where the page reference is manipulated are converted to the newly introduced wrapper functions. This is a preparation step for adding a tracepoint to each page reference manipulation function. With that facility, we can easily find the reason for a CMA allocation failure. There is no functional change in this patch. In addition, this patch also converts reference read sites. It will help a second step that renames page._count to something else and prevents later attempts to access it directly (suggested by Andrew).
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
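The wrappers this series introduces live in include/linux/page_ref.h; a sketch of the kind of conversion it performs (simplified, and assuming the pre-rename page->_count field of that era):

    #include <linux/page_ref.h>

    /* Sketch of a converted call site: direct atomics on page->_count
     * become wrappers that a tracepoint can later hook.
     */
    static int example_take_page_ref(struct page *page)
    {
            /* before: atomic_inc(&page->_count); */
            page_ref_inc(page);

            /* before: return atomic_read(&page->_count); */
            return page_ref_count(page);
    }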
-
- 17 February 2016, 1 commit
-
-
Submitted by Keller, Jacob E
Also print an error message in case we do have to reconfigure, as this should no longer happen due to the ethtool changes. If it somehow does occur, the user should be made aware of it.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 22 December 2015, 1 commit
-
-
Submitted by Bruce Allan
Clean up a checkpatch GLOBAL_INITIALIZERS error.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 14 December 2015, 3 commits
-
-
Submitted by Bruce Allan
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
If the q_vector allocation fails, we should free the resources associated with the MSI-X vector table.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
We haven't bumped the driver version in a while, despite many fixes being pulled in from the out-of-tree SourceForge driver. Update the version to match.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 06 December 2015, 3 commits
-
-
Submitted by Jacob Keller
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
The existing adaptive ITR algorithm is overly restrictive. It throttles incorrectly at various traffic rates and does not produce good performance. The algorithm now allows more interrupts per second and does some extra calculation to improve behavior for smaller packet loads. In addition, take into account the new itr_scale from the hardware, which indicates how much to scale based on PCIe link speed.
Reported-by: Matthew Vick <matthew.vick@intel.com>
Reported-by: Alex Duyck <alexander.duyck@gmail.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Define a macro for identifying when the ITR value is dynamic, or adaptive. The concept was taken from i40e. This makes clear what the check is and reduces the line length to something more reasonable in a few places.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
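A sketch of what such a macro can look like, assuming the driver stores an "adaptive" flag bit inside the ITR value (the flag bit below is an assumption, not the actual register layout):

    /* Hypothetical flag bit marking a stored ITR value as adaptive. */
    #define EXAMPLE_ITR_ADAPTIVE    0x8000

    #define ITR_IS_ADAPTIVE(itr)    (!!((itr) & EXAMPLE_ITR_ADAPTIVE))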
-
- 24 November 2015, 1 commit
-
-
Submitted by Alexander Duyck
This patch corrects an issue in which the polling routine would increase the Rx budget to at least 1 per queue if multiple queues were present. This resulted in Rx packets being processed when the budget was 0, which is meant to indicate that no Rx work can be handled.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
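The underlying pattern is the per-ring budget split in the NAPI poll routine; a sketch of the bug and the fix, assuming the budget is divided evenly across Rx rings:

    #include <linux/kernel.h>

    /* Sketch: split the NAPI budget across Rx rings without rounding a
     * zero budget up to 1.
     */
    static int example_per_ring_budget(int budget, int num_rx_rings)
    {
            /* buggy version: max(budget / num_rx_rings, 1) turns 0 into 1 */
            if (budget && num_rx_rings > 1)
                    return max(budget / num_rx_rings, 1);
            return budget;
    }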
-
- 16 October 2015, 1 commit
-
-
Submitted by Jesse Brandeburg
As per Eric Dumazet's previous patches (see commit 24d2e4a5, "tg3: use napi_complete_done()"), quoting verbatim:

Using napi_complete_done() instead of napi_complete() allows us to use /sys/class/net/ethX/gro_flush_timeout. The GRO layer can aggregate more packets if the flush is delayed a bit, without having to set too-big coalescing parameters that impact latencies.

</end quote>

Tested configuration: low latency via ethtool -C ethx adaptive-rx off rx-usecs 10 adaptive-tx off tx-usecs 15; workload: streaming rx using netperf TCP_MAERTS.

igb:
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
...
Interim result: 941.48 10^6bits/s over 1.000 seconds ending at 1440193171.589
Alignment Offset Bytes Bytes Recvs Bytes Sends
Local Remote Local Remote Xfered Per Per
Recv Send Recv Send Recv (avg) Send (avg)
8 8 0 0 1176930056 1475.36 797726 16384.00 71905

MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.0.1 () port 0 AF_INET : demo
...
Interim result: 941.49 10^6bits/s over 0.997 seconds ending at 1440193142.763
Alignment Offset Bytes Bytes Recvs Bytes Sends
Local Remote Local Remote Xfered Per Per
Recv Send Recv Send Recv (avg) Send (avg)
8 8 0 0 1175182320 50476.00 23282 16384.00 71816

i40e: Hard to test because the traffic is incoming so fast (24Gb/s) that GRO always receives 87kB, even at the highest interrupt rate.

Other drivers were only compile tested.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
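The conversion itself is mechanical; a sketch of a poll routine after the change (handler structure simplified):

    #include <linux/netdevice.h>

    /* Sketch: report the work actually done so the GRO layer can honor
     * gro_flush_timeout.
     */
    static int example_poll(struct napi_struct *napi, int budget)
    {
            int work_done = 0;

            /* ... Tx/Rx cleanup accumulates work_done ... */

            if (work_done >= budget)
                    return budget;

            /* before: napi_complete(napi); */
            napi_complete_done(napi, work_done);
            return work_done;
    }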
-
- 14 October 2015, 1 commit
-
-
Submitted by Jacob Keller
Check for the actual value NETREG_UNINITIALIZED in case it ever changes from the current value of zero.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
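The check amounts to comparing against the named constant rather than relying on it being zero; a sketch:

    #include <linux/netdevice.h>

    /* Sketch: explicit comparison, robust against the enum value changing. */
    static bool example_netdev_uninitialized(struct net_device *netdev)
    {
            return netdev->reg_state == NETREG_UNINITIALIZED;
    }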
-
- 23 September 2015, 1 commit
-
-
Submitted by Jacob Keller
Add a private ethtool flag to enable display of these statistics, which are generally less useful but can sometimes help with debugging. The most useful portion is the ability to see what the PF thinks the VF mailboxes look like.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 16 September 2015, 2 commits
-
-
Submitted by Jacob Keller
This patch ensures that VLAN traffic on the default VID will go to the corresponding VLAN device if it exists. To do this, mask the rx_ring VID if we have an active VLAN on that VID. For this to work correctly, we need to update fm10k_process_skb_fields to correctly mask off the VLAN_PRIO_MASK bits and compare them separately; otherwise we incorrectly compare the priority bits with the cleared flag. This also happens to fix a related bug where having priority bits set causes us to incorrectly classify traffic.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
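A sketch of the masking this implies, assuming the received tag still carries 802.1p priority bits (the function and parameter names are illustrative):

    #include <linux/if_vlan.h>

    /* Sketch: strip the priority bits before comparing VLAN IDs, so a
     * tagged frame with a non-zero priority still matches its VID.
     */
    static bool example_vid_match(u16 rx_vlan_tag, u16 ring_vid)
    {
            return (rx_vlan_tag & ~VLAN_PRIO_MASK) ==
                   (ring_vid & ~VLAN_PRIO_MASK);
    }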
-
Submitted by Alexander Duyck
This change pulls out the optimization that assumed all fragments would be limited to page size. That hasn't been the case for some time now, and assuming it is incorrect since the TCP allocator can provide up to a 32K page fragment.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 22 August 2015, 1 commit
-
-
Submitted by Michal Hocko
Commit c48a11c7 ("netvm: propagate page->pfmemalloc to skb") added checks for page->pfmemalloc to __skb_fill_page_desc():

if (page->pfmemalloc && !page->mapping)
	skb->pfmemalloc = true;

It assumes page->mapping == NULL implies that page->pfmemalloc can be trusted. However, __delete_from_page_cache() can set page->mapping to NULL and leave the page->index value alone. Because the two fields share a union, a non-zero page->index will be interpreted as a true page->pfmemalloc. So the assumption is invalid if the networking code can see such a page, and it seems it can. We have encountered this with an NFS-over-loopback setup when such a page is attached to a new skbuf. There is no copying going on in this case, so the page confuses __skb_fill_page_desc, which interprets the index as the pfmemalloc flag, and the network stack drops packets that have been allocated using the reserves unless they are to be queued on sockets handling the swapping, which is the case here. That leads to hangs when the NFS client waits for a response from the server which has been dropped and thus never arrives. The struct page is already heavily packed, so rather than finding another hole to put the flag in, let's do a trick instead: we can reuse the index again but define it to an impossible value (-1UL). This is the page index, so it should never see a value that large. Replace all direct users of page->pfmemalloc with page_is_pfmemalloc, which will hide this nastiness from unspoiled eyes. The information will get lost if somebody wants to use page->index, obviously, but that was the case before, and the original code expected that the information should be persisted somewhere else if it is really needed (e.g. what SLAB and SLUB do).
[akpm@linux-foundation.org: fix blooper in slub]
Fixes: c48a11c7 ("netvm: propagate page->pfmemalloc to skb")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Debugged-by: Vlastimil Babka <vbabka@suse.com>
Debugged-by: Jiri Bohac <jbohac@suse.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Miller <davem@davemloft.net>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org> [3.6+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
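The accessor pair this commit describes reduces to a few lines; a simplified sketch matching the -1UL sentinel above:

    #include <linux/mm_types.h>

    /* Sketch of the helpers: page->index doubles as the pfmemalloc flag,
     * with the impossible index -1UL standing in for "true".
     */
    static inline void example_set_page_pfmemalloc(struct page *page)
    {
            page->index = -1UL;
    }

    static inline bool example_page_is_pfmemalloc(struct page *page)
    {
            return page->index == -1UL;
    }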
-
- 18 June 2015, 1 commit
-
-
Submitted by Alexander Duyck
This change folds the fm10k_pull_tail call into fm10k_add_rx_frag. The advantage of doing this is that the fragment doesn't have to be modified after it is added to the skb.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 May 2015, 1 commit
-
-
Submitted by Alexander Duyck
The netpoll path will call napi->poll with a budget of 0 in order to clean the Tx rings only. This change updates the fm10k driver so that it correctly supports that, instead of cleaning 1 Rx frame when a budget of 0 is received.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
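A sketch of the poll-routine shape this implies:

    #include <linux/netdevice.h>

    /* Sketch: honor netpoll's Tx-only convention of budget == 0. */
    static int example_netpoll_aware_poll(struct napi_struct *napi, int budget)
    {
            /* Tx ring cleanup always runs here ... */

            /* budget of 0 means "clean Tx only": do no Rx work at all */
            if (budget <= 0)
                    return budget;

            /* ... Rx processing bounded by budget ... */
            return 0;
    }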
-
- 15 April 2015, 4 commits
-
-
Submitted by Jeff Kirsher
With the recent driver changes, bump the version.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
-
Submitted by Jeff Kirsher
Since we run the watchdog periodically, which might take a while and potentially monopolize the system default workqueue, create our own separate workqueue. This also helps reduce and stabilize latency between scheduling the work in our interrupt and actually performing the work. Still use a timer for the regular scheduled interval, but queue the work onto our own workqueue. It seemed overkill to create one workqueue per interface, so we just spawn a single workqueue for all interfaces upon driver load. For this reason, use a multi-threaded workqueue with one thread per processor, rather than a single-threaded queue.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
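A sketch of the driver-wide workqueue lifecycle described above (queue name as described; the exact creation call and flags are an assumption):

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>

    /* One workqueue shared by all interfaces, spawned at driver load. */
    static struct workqueue_struct *fm10k_workqueue;

    static int __init example_driver_init(void)
    {
            /* multi-threaded: one worker context per processor */
            fm10k_workqueue = create_workqueue("fm10k");
            return fm10k_workqueue ? 0 : -ENOMEM;
    }

    static void __exit example_driver_exit(void)
    {
            destroy_workqueue(fm10k_workqueue);
            fm10k_workqueue = NULL;
    }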
-
Submitted by Jeff Kirsher
Since we already print this message when a reset is requested via the RESET_REQUESTED flag, we do not need to print it before setting the flag.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jeff Kirsher
There were several functions with parameters that were never used, or only sometimes used. To resolve possible compiler warnings, annotate them with the __always_unused or __maybe_unused kernel macros.
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
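In practice the annotations read like this (the function and parameters are made up):

    #include <linux/compiler.h>
    #include <linux/types.h>

    struct example_hw;

    /* Parameters kept for callback-signature compatibility. */
    static s32 example_msg_handler(struct example_hw __always_unused *hw,
                                   u32 __maybe_unused msg_id)
    {
            return 0;
    }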
-
- 03 March 2015, 3 commits
-
-
Submitted by Matthew Vick
Fix a few silly typos in the code and checkpatch warnings in support of general code cleanliness.
Signed-off-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
The introduction of ndo_features_check allows drivers to report their offload capabilities per-skb. Implement this in fm10k to take advantage of the new functionality.
Reported-by: Joe Stringer <joestringer@nicira.com>
Signed-off-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Matthew Vick
The FM10000 host interface can only support up to 184 bytes when performing tunnel offloads. Because of this, a check was added to prevent the driver from attempting to feed the hardware a header too big for it to parse. Make this check a little more robust by calculating the inner L4 header length based on whether it is TCP or UDP.
Cc: Joe Stringer <joestringer@nicira.com>
Signed-off-by: Matthew Vick <matthew.vick@intel.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
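A sketch of the kind of length calculation described, assuming the 184-byte hardware limit and the stack's inner-header accessors (the limit macro and function name are illustrative):

    #include <linux/in.h>
    #include <linux/skbuff.h>
    #include <linux/tcp.h>
    #include <linux/udp.h>

    #define EXAMPLE_TUNNEL_HEADER_LIMIT     184     /* bytes the hardware parses */

    /* Sketch: size the inner L4 header by protocol before offloading. */
    static bool example_tunnel_headers_fit(struct sk_buff *skb, u8 l4_proto)
    {
            unsigned int l4len;

            switch (l4_proto) {
            case IPPROTO_TCP:
                    l4len = inner_tcp_hdrlen(skb);
                    break;
            case IPPROTO_UDP:
                    l4len = sizeof(struct udphdr);
                    break;
            default:
                    return false;
            }

            return (skb_inner_transport_header(skb) - skb->data) + l4len <=
                   EXAMPLE_TUNNEL_HEADER_LIMIT;
    }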
-
- 23 January 2015, 2 commits
-
-
Submitted by Joe Stringer
fm10k supports up to 184 bytes of inner+outer headers. Add an initial check to fail encap offload if these are too large.
Signed-off-by: Joe Stringer <joestringer@nicira.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This patch cleans up the page reuse code, getting it into a state where all the needed workarounds are in place, as well as cleaning up a few minor oversights such as using __free_pages instead of put_page to drop a locally allocated page. It also cleans up how we clear the descriptor status bits. Previously they were zeroed as part of clearing the hdr_addr. However, the hdr_addr is a 64 bit field, and 64 bit writes can be a bit more expensive on 32 bit systems. Since we are no longer using the header split feature, the upper 32 bits of the address no longer need to be cleared. As a result we can just clear the status bits and leave the length and VLAN fields as-is, which should provide more information in debugging.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Tested-by: Krishneil Singh <Krishneil.k.singh@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 14 January 2015, 1 commit
-
-
Submitted by Jiri Pirko
The same macros are used for rx as well, so rename them.
Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 December 2014, 1 commit
-
-
Submitted by Alexander Duyck
This change makes it so that dma_rmb is used when reading the Rx descriptor. The advantage of dma_rmb is that it allows for a much lower-cost barrier on x86, powerpc, arm, and arm64 architectures than a traditional memory barrier when dealing with reads that only have to synchronize with coherent memory. In addition, I have updated the code so that it just checks to see if any bits have been set instead of just the DD bit. Since the DD bit will always be set as part of a descriptor write-back, we just need to check for a non-zero value being present at that memory location rather than checking for any specific bit. This allows the code itself to appear much cleaner and gives the compiler more room to optimize.
Cc: Matthew Vick <matthew.vick@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
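The resulting Rx cleanup check looks roughly like this (descriptor layout is illustrative):

    #include <asm/barrier.h>
    #include <linux/types.h>

    struct example_rx_desc {
            __le32 staterr; /* write-back status/error word */
            /* ... length, VLAN, etc. ... */
    };

    /* Sketch: test the whole status word for any set bit, then order the
     * remaining descriptor reads with the cheaper dma_rmb().
     */
    static bool example_rx_desc_done(const struct example_rx_desc *rx_desc)
    {
            if (!rx_desc->staterr)
                    return false;

            /* read no other descriptor fields before the status word */
            dma_rmb();
            return true;
    }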
-
- 11 December 2014, 1 commit
-
-
Submitted by Alexander Duyck
This change replaces calls to netdev_alloc_skb_ip_align with napi_alloc_skb. The advantage of napi_alloc_skb is currently the fact that the page allocation doesn't make use of any irq disable calls. There are a few spots where I couldn't replace the calls, as the buffer allocation routine is called as part of init, which is outside of the softirq context.
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
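The replacement is a drop-in at Rx allocation sites inside the poll loop; a sketch:

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Sketch: allocate an Rx skb from NAPI/softirq context. */
    static struct sk_buff *example_alloc_rx_skb(struct napi_struct *napi,
                                                unsigned int size)
    {
            /* before: return netdev_alloc_skb_ip_align(netdev, size); */
            return napi_alloc_skb(napi, size);
    }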
-
- 09 December 2014, 1 commit
-
-
Submitted by Alexander Duyck
Update the Intel Ethernet drivers to use eth_skb_pad() and skb_put_padto instead of doing their own implementations of the function. This also cleans up two other spots where skb_pad was called but the length and tail pointers were being manipulated directly, instead of just having the padding length added via __skb_put.
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
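Typical use of the helper in a transmit path (simplified sketch):

    #include <linux/errno.h>
    #include <linux/etherdevice.h>

    /* Sketch: pad a frame to the Ethernet minimum before transmit.
     * eth_skb_pad() returns 0 on success and frees the skb on failure.
     */
    static int example_tx_pad(struct sk_buff *skb)
    {
            if (eth_skb_pad(skb))
                    return -ENOMEM; /* skb has already been freed */
            return 0;
    }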
-
- 12 November 2014, 1 commit
-
-
Submitted by Alexander Duyck
The Intel drivers were pretty much just using the plain vanilla GFP flags in their calls to __skb_alloc_page, so this change makes them use dev_alloc_page, which just uses GFP_ATOMIC for the gfp_flags value.
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Cc: Matthew Vick <matthew.vick@intel.com>
Cc: Don Skidmore <donald.c.skidmore@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 October 2014, 1 commit
-
-
Submitted by Alexander Duyck
This change adds support for skb->xmit_more based on the changes that were made to igb to support the feature. The main change is moving up the maybe_stop_tx check so that we can use netif_xmit_stopped to determine whether we must write the tail because we can add no further buffers.
Acked-by: Matthew Vick <matthew.vick@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
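A sketch of the deferred tail write this enables (the ring structure is illustrative; skb->xmit_more is the field of that era, later replaced by netdev_xmit_more()):

    #include <linux/io.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    struct example_tx_ring {
            void __iomem *tail;
            struct netdev_queue *txq;
            u16 next_to_use;
    };

    /* Sketch: skip the MMIO tail write while the stack promises more
     * frames, unless the queue just filled and must be flushed.
     */
    static void example_maybe_kick_hw(struct example_tx_ring *ring,
                                      struct sk_buff *skb)
    {
            if (!skb->xmit_more || netif_xmit_stopped(ring->txq))
                    writel(ring->next_to_use, ring->tail);
    }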
-
- 11 October 2014, 1 commit
-
-
Submitted by Eric Dumazet
It is illegal to use atomic_set(&page->_count, 2) even if we 'own' the page. Other entities in the kernel need to use get_page_unless_zero() to get a reference to the page before testing page properties, so we could lose a refcount increment.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
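A sketch of the safe pattern this implies, using the page->_count field name of that era:

    #include <linux/mm.h>

    /* Sketch: bump the recycled page's refcount instead of asserting it.
     * atomic_set(&page->_count, 2) races with get_page_unless_zero();
     * an increment preserves any reference taken concurrently.
     */
    static void example_recycle_rx_page(struct page *page)
    {
            /* before (racy): atomic_set(&page->_count, 2); */
            atomic_inc(&page->_count);
    }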
-