- 16 February 2017, 9 commits
-
-
Submitted by Alexander Duyck
This change makes it so that we use the length of the packet instead of the DD status bit to determine if a new descriptor is ready to be processed. The obvious advantage is that it cuts down on reads: since the size going from zero to a non-zero value is enough to tell us the packet has completed, we don't really need to read the DD bit at all. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
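A minimal sketch of the idea, using the ixgbe advanced Rx descriptor layout (illustrative, not the exact patch):

```c
/* Poll the written-back length instead of the DD status bit: a non-zero
 * size means hardware has completed the descriptor.
 */
union ixgbe_adv_rx_desc *rx_desc = IXGBE_RX_DESC(rx_ring, rx_ring->next_to_clean);
unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);

if (!size)
	break;	/* descriptor not yet written back */

/* Make sure the size read above is ordered before any further reads of
 * the descriptor or the packet data.
 */
dma_rmb();
```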
-
Submitted by Alexander Duyck
In order to support build_skb with jumbo frames it will be necessary to use 3K buffers for the Rx path, with 8K pages backing them. This is needed on architectures that implement 4K pages because we can't fit 2K buffers plus padding in a 4K page. On systems that support page sizes larger than 4K, the 3K attribute will only be applied to FCoE, as we can fall back to using just 2K buffers and adding the padding. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
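A rough sizing check for why a padded buffer no longer fits in half of a 4K page; the helper name here is hypothetical:

```c
#include <linux/skbuff.h>

/* With build_skb() each Rx buffer must also carry skb headroom in front of
 * the data and struct skb_shared_info behind it, so a padded 2K data area
 * overflows a 2K half-page slot; a 3K buffer in an order-1 (8K) page leaves
 * the needed room on 4K-page systems.
 */
static unsigned int rx_buffer_truesize(unsigned int data_len)
{
	return SKB_DATA_ALIGN(NET_SKB_PAD + NET_IP_ALIGN + data_len) +
	       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
}
```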
-
Submitted by Alexander Duyck
Batch the page count updates instead of doing them one at a time. This improves overall performance because the atomic increments are expensive: on x86 they are locked operations, which can cause stalls. Doing bulk updates consolidates that stall, which should help overall receive performance. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
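Conceptually the batching looks something like this (a sketch of the technique, not the driver's exact code; struct and function names are illustrative):

```c
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

struct rx_buffer {
	struct page *page;
	u16 pagecnt_bias;
};

/* Take one large page reference up front instead of a locked get_page()
 * every time the buffer is reused.
 */
static bool rx_alloc_page(struct rx_buffer *bi)
{
	struct page *page = dev_alloc_page();

	if (!page)
		return false;

	page_ref_add(page, USHRT_MAX - 1);
	bi->page = page;
	bi->pagecnt_bias = USHRT_MAX;
	return true;
}

/* Handing a reference to the stack is now just a local decrement... */
static void rx_reuse_page(struct rx_buffer *bi)
{
	bi->pagecnt_bias--;
}

/* ...and the accumulated bias is settled in one go when the page is freed. */
static void rx_free_page(struct rx_buffer *bi)
{
	__page_frag_cache_drain(bi->page, bi->pagecnt_bias);
}
```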
-
Submitted by Alexander Duyck
This patch adds support for DMA_ATTR_SKIP_CPU_SYNC and DMA_ATTR_WEAK_ORDERING. Enabling both of these for the Rx path yields performance improvements on architectures that implement either one, since page mapping and unmapping only has to sync what is actually being used instead of the entire buffer. In addition, enabling the weak-ordering attribute provides a performance improvement for architectures that can associate a memory ordering with a DMA buffer, such as Sparc. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
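The Rx mapping then ends up looking roughly like this (a sketch; the ring and page variables are placeholders):

```c
#include <linux/dma-mapping.h>

/* Map without an implicit CPU sync and allow weak ordering; the driver
 * later syncs only the region it actually touches.
 */
dma_addr_t dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
				    DMA_FROM_DEVICE,
				    DMA_ATTR_SKIP_CPU_SYNC |
				    DMA_ATTR_WEAK_ORDERING);

if (dma_mapping_error(rx_ring->dev, dma))
	__free_page(page);
```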
-
Submitted by Alexander Duyck
On some platforms, syncing a buffer for DMA is expensive. Rather than sync the whole 2K receive buffer, only synchronise the length of the frame, which will typically be the MTU, or a much smaller TCP ACK. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
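In code the bounded sync is roughly the following (illustrative; `size` is assumed to be the length written back by hardware):

```c
/* Sync only the bytes the NIC actually wrote, not the whole 2K buffer. */
dma_sync_single_range_for_cpu(rx_ring->dev, rx_buffer->dma,
			      rx_buffer->page_offset, size,
			      DMA_FROM_DEVICE);
```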
-
Submitted by Alexander Duyck
This patch consolidates the code for the ixgbe driver so that it is more in line with what is already in igb. The general idea is to consolidate functions that represent logical steps in the Rx process so we can later update them more easily. Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mark Rustad
Update the driver version to reflect the new devices that it supports. Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Stephen Hemminger
Since dcbnl_ops is global, it should be prefixed by ixgbe_. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Tony Nguyen
Though not advertised through ethtool, if the link partner advertises a 2.5Gb or 5Gb connection, and the adapter supports it, allow the speed to be used. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 12 February 2017, 14 commits
-
-
Submitted by Henry Tieman
Ethtool support needs to save more PHY information. The added information includes FEC capabilities and 25G link types. Without this change it is possible to lose 25G or FEC settings by using ethtool. Change-ID: Ie42255b1e901ffbf9583b8c46466a54894114280 Signed-off-by: Henry Tieman <henry.w.tieman@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Refactor how we add new filters to firmware to avoid a race condition that can occur due to removing filters from the hash temporarily. To understand the race condition, suppose that you have a number of MAC filters, but have not yet added any VLANs. Now, add two VLANs in rapid succession. A possible resulting flow would look something like the following:

(1) lock hash for add VLAN
(2) add the new MAC/VLAN combos for each current MAC filter
(3) unlock hash
(4) lock hash for filter sync
(5) notice that we have a VLAN, so prepare to update all MAC filters with VLAN=-1 to be VLAN=0
(6) move NEW and REMOVE filters to temporary list
(7) unlock hash
(8) lock hash for add VLAN
(9) add new MAC/VLAN combos. Notice that no MAC filters are currently in the hash list, so we don't add any VLANs <--- BUG!
(10) unlock hash
(11) sync the temporary lists to firmware
(12) lock hash for post-sync
(13) move the temporary elements back to the main list

Because we take filters out of the main hash into temporary lists, we introduce a narrow window where it is possible that other callers to the list will not see some of the filters which were previously added but have not yet been finalized. This results in sometimes dropping VLAN additions, and could also result in failing to add a MAC address on the newly added VLAN. One obvious way to avoid this race condition would be to lock the entire firmware process. Unfortunately this does not work because adminq firmware commands take a mutex, which results in a sleep-while-atomic BUG(). So we can't use the simplest approach. An alternative approach is to simply not remove the filters from the hash list while adding. Instead, add an i40e_new_mac_filter structure which we will use to track added filters. This avoids the need to remove the filter from the hash list. We'll store a pointer to the original i40e_mac_filter, along with our own copy of the state. We won't update the state directly, so as to avoid racing with other code that may modify the state while under the lock. We are safe to read f->macaddr and f->vlan since these only change in two locations. The first is on filter creation, which must have already occurred. The second is inside i40e_correct_vlan_filters, which was previously run after creation of this object and can't be run again until after. Thus, we should be safe to read the MAC address and VLAN while outside the lock. We also aren't going to run into a use-after-free issue because the only place where we free filters is when they are marked FAILED or when we remove them inside the sync subtask. Since the subtask has its own critical flag to prevent duplicate runs, we know this won't happen. We also know that the only location to transition a filter from NEW to FAILED is inside the subtask, so we aren't worried about that either. Use the wrapper i40e_new_mac_filter for additions, and once we've finalized the addition to firmware, we will update the filter state inside a lock and then free the wrapper structure. In order to avoid a possible race condition with filter deletion, we won't update the original filter state unless it is still I40E_FILTER_NEW when we finish the firmware sync. This approach is more complex, but avoids race conditions related to filters being temporarily removed from the list. We do not need the same behavior for deletion because we always unconditionally remove filters from the list regardless of the firmware status. Change-Id: I14b74bc2301f8e69433fbe77ebca532db20c5317 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
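The wrapper described above amounts to something like this (a sketch consistent with the description; the exact field set in the driver may differ):

```c
/* Tracks an addition in flight to firmware without pulling the original
 * i40e_mac_filter out of the VSI hash.
 */
struct i40e_new_mac_filter {
	struct hlist_node hlist;	/* entry in the temporary add list */
	struct i40e_mac_filter *f;	/* the hash-resident filter it mirrors */
	enum i40e_filter_state state;	/* private copy of the add state */
};
```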
-
Submitted by Jacob Keller
Fix a bug where we modified the mac_filter_hash while outside a lock, when handling addition of broadcast filters. Normally, we add filters to firmware by batching the additions into lists and issuing 1 update for every few filters. Broadcast filters are handled differently, by instead setting the broadcast promiscuous mode flags. In order to make sure the 1<->1 mapping of filters in our addition array lined up with filters in the hlist tmp_add_list, we had to remove the filter and move it back to the main hash. However, we didn't do this under lock, which could cause consistency problems for the list. Fix this by updating the i40e_update_filter_state logic so that it knows to avoid broadcast filters. This ensures that we don't have to remove the filter separately, and can put it back using the normal flow. Change-ID: Id288fade80b3e3a9a54b68cc249188cb95147518 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
The intent of this message was to indicate to a user that we might have missed a timestamp event for a valid packet. The original method of detecting the missed events relied on waiting until all 4 registers were filled. A recent commit d55458c0cd7a5 ("i40e: replace PTP Rx timestamp hang logic") replaced this logic with a much better detection scheme that can detect a stalled Rx timestamp register even when other registers are still functional. The new logic means that a message will be displayed almost as soon as a timestamp for a dropped frame occurs. This new logic highlights that the hardware will attempt to timestamp frames which it later decides to drop. The most prominent example is when a multicast PTP frame is received on a multicast address that we are not subscribed to. Because the hardware initiates the Rx timestamp as soon as possible, it will latch an RXTIME register, but then drop the packet. This results in users being confused by the message, as they are not expecting to see dropped-timestamp messages unless their application also indicates that timestamps were missing. Resolve this by reducing the severity and frequency of the displayed message. We now only print the message if 3 or 4 of the RXTIME registers are stalled and get cleared within the same watchdog event. This ensures that the common case does not constantly display the message. Additionally, since the message is likely not as meaningful to most users, reduce the message to a dev_dbg instead of a dev_warn. Users can still get a count of the number of timestamps dropped by reading the ethtool statistics value, if necessary. Change-ID: I35494442226a444c418dfb4f91a3070d06c8435c Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
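The resulting reporting policy is roughly the following (a sketch; `cleared` is assumed to count how many of the four RXTIME latches were flushed during this watchdog pass):

```c
/* Only complain when most of the RXTIME registers stalled at once, and only
 * at debug level; the ethtool counter still records every cleared event.
 */
pf->rx_hwtstamp_cleared += cleared;
if (cleared > 2)
	dev_dbg(&pf->pdev->dev,
		"Dropped %d missed RXTIME timestamp events\n", cleared);
```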
-
Submitted by Henry Tieman
Store the FEC status bits from the link up event into the hw_link_info structure. Change-ID: I9a7b256f6dfb0dce89c2f503075d0d383526832e Signed-off-by: Henry Tieman <henry.w.tieman@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Sudheer Mogilappagari
Currently, i40e_bus_info has only PCI device and function info, and log messages print the device number as the bus number. Add a field to provide the bus number and modify log statements to print bus, device, and function information. Change-ID: I811617cee2714cc0d6bade8d369f57040990756f Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch Williams
The function i40e_client_prepare() can never return an error. So make it void and quit checking its return value. Change-ID: I9ff311e2324dde329eb68648efb2c94aaff856db Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bimmy Pujari
Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
The original comment implies that the only location where the raw_packet buffer will be freed is in i40e_clean_tx_ring(), which is incorrect. In fact this isn't even the normal case. Update the comment explaining where the memory is freed. Change-ID: Ie0defc35ed1c3af183f81fdc60b6d783707a5595 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Scott Peterson
Reorganize the i40e_pull_tail() logic, doing it in i40e_add_rx_frag() where it's cheaper. The igb driver does this the same way. Also rename i40e_page_is_reserved() to reflect what it actually tests. Change-ID: Icd9cc507aae1fcdc02308b3a09034111b4c24071 Signed-off-by: Scott Peterson <scott.d.peterson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Scott Peterson
This patch reduces the size of struct i40e_rx_buffer by one pointer, and makes the i40e driver a little more consistent with the igb driver in terms of packets that span buffers. We do this by moving the skb field from struct i40e_rx_buffer to struct i40e_ring. We pass the skb we already have (or NULL if we don't) to i40e_fetch_rx_buffer(), which skips the skb allocation if we already have one for this packet. Change-ID: I4ad48a531844494ba0c5d8e1a62209a057f661b0 Signed-off-by: Scott Peterson <scott.d.peterson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
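Structurally the change amounts to something like this (field layout is illustrative, not the exact driver definition):

```c
#include <linux/skbuff.h>

struct i40e_rx_buffer {
	dma_addr_t dma;
	struct page *page;
	unsigned int page_offset;
	/* struct sk_buff *skb;  -- removed: one pointer saved per buffer */
};

struct i40e_ring {
	/* ... */
	struct sk_buff *skb;	/* skb in progress for a packet spanning buffers */
	/* ... */
};
```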
-
Submitted by Scott Peterson
On packet RX, we perform a DMA sync for CPU before passing the packet up. Here we limit that sync to the actual length of the incoming packet, rather than always syncing the entire buffer. Change-ID: I626aaf6c37275a8ce9e81efcaa773f327b331487 Signed-off-by: Scott Peterson <scott.d.peterson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Mitch Williams
The iWarp client cannot continue until this operation has been completed by the PF driver. Sleep (with timeout) until the reply from the PF driver has been received. Change-ID: I5dc41b857bba32d0218b7ce167b5da122dadf349 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
We can avoid a minor bit of work by calling check params after we check for the client instance, since we're about to return early in cases where we do not have a client. Change-ID: I56f8ea2ba48d4f571fa331c9ace50819a022fa1c Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 04 February 2017, 2 commits
-
-
Submitted by Eric Dumazet
In linux-4.5, busy polling was implemented in the core NAPI stack, meaning that all custom implementations can be removed from drivers. Not only do we remove a lot of code, we also remove one lock operation in the fast path and allow GRO to do its job. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Eric Dumazet
In linux-4.5, busy polling was implemented in the core NAPI stack, meaning that all custom implementations can be removed from drivers. Not only do we remove a lot of code, we also remove one lock operation in the fast path and allow GRO to do its job. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Acked-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 03 February 2017, 14 commits
-
-
Submitted by Alan Brady
Due to the resolution of the register controlling interrupt rate limiting, setting certain values for the interrupt rate limit makes it appear as though the limiting is not completely accurate. The problem is that the interrupt rate limit is rounded down to the nearest multiple of 4. This patch fixes the problem by adding some feedback to the user as to the actual interrupt rate limit being used when it differs from the requested limit. Without this patch, setting interrupt rate limits may appear to behave inaccurately. Change-ID: I3093cf3f2d437d35a4c4f4bb5af5ce1b85ab21b7 Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alan Brady
This patch refactors the macro INTRL_USEC_TO_REG into a static inline function and fixes a couple of subtle bugs caused by the macro. This patch fixes a bug which was caused by passing a bad register value to the firmware. If enabling interrupt rate limiting, a non-zero value for the rate limit must be used; otherwise the firmware sets the interrupt rate limit to the maximum value. Due to the limited resolution of the register, attempting to set a value of 1, 2, or 3 would be rounded down to 0 while limiting was left enabled, causing unexpected behavior. This patch also fixes a possible bug in which using the macro itself can introduce unintended side effects, because the macro argument is used more than once in the macro definition (e.g. a variable post-increment argument would perform a double increment on the variable). Without this patch, attempting to set interrupt rate limits of 1, 2, or 3 results in unexpected behavior, and future use of this macro could cause subtle bugs. Change-Id: I83ac842de0ca9c86761923d6e3a4d7b1b95f2b3f Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
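The shape of the fix is the classic macro-to-static-inline conversion; a sketch under the register layout described above (the limit is stored in 4-usec units; the names here are illustrative, not the driver's exact definitions):

```c
#define INTRL_ENA	BIT(6)	/* illustrative enable bit */

/* Old style: the argument is evaluated twice, so INTRL_USEC_TO_REG(x++)
 * would increment x twice, and 1-3 usec silently rounds down to 0 with the
 * enable bit still set (i.e. maximum limiting).
 */
#define INTRL_USEC_TO_REG(usec)	((usec) ? ((usec) >> 2) | INTRL_ENA : 0)

/* New style: single evaluation, and a limit that would round to zero is
 * treated as "rate limiting disabled" instead.
 */
static inline u16 intrl_usec_to_reg(int usec)
{
	if (usec >> 2)
		return (usec >> 2) | INTRL_ENA;
	return 0;
}
```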
-
Submitted by Mitch Williams
After refactoring the client open and close code, this is no longer needed. Remove it. Change-ID: If8e6e32baa354d857c2fd8b2f19404f1786011c4 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jayaprakash Shanmugam
The requirement for VFs to use the VMBus has been removed, so remove the Hyper-V VF device ID. Change-ID: I84f0964f443ee0db3e5e444b5ace996eb71b8280 Signed-off-by: Jayaprakash Shanmugam <jayaprakash.shanmugam@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Alexander Duyck
This patch does some quick work to pull some of the data off the stack and start storing it in the Tx buffer info section of the Tx ring. Ideally we should be moving away from having to store much of anything on the stack and can just maintain it all in the descriptor rings. Change-ID: I4b4715ea1920e122502482b3f9e56a9a6cb1e9fe Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Tushar Dave
'struct i40e_dma_mem', defined with the 'packed' directive, causes kernel unaligned errors on sparc. e.g. i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.6.16-k i40e: Copyright (c) 2013 - 2014 Intel Corporation. Kernel unaligned access at TPC[44894c] dma_4v_alloc_coherent+0x1ac/0x300 Kernel unaligned access at TPC[44894c] dma_4v_alloc_coherent+0x1ac/0x300 Kernel unaligned access at TPC[44894c] dma_4v_alloc_coherent+0x1ac/0x300 Kernel unaligned access at TPC[44894c] dma_4v_alloc_coherent+0x1ac/0x300 Kernel unaligned access at TPC[44894c] dma_4v_alloc_coherent+0x1ac/0x300 i40e 0000:03:00.0: fw 5.1.40981 api 1.5 nvm 5.04 0x80002548 0.0.0 This could be fixed with get_unaligned()/put_unaligned(). However, nothing in the driver indicates that 'struct i40e_dma_mem' is shoved directly into NIC hardware; instead, fields of the struct are read and used for hardware. Therefore, __packed is unnecessary for 'struct i40e_dma_mem'. In addition, although 'struct i40e_virt_mem' doesn't cause any unaligned access, keeping it packed is unnecessary as well, for the aforementioned reason. This change makes 'struct i40e_dma_mem' and 'struct i40e_virt_mem' unpacked. Signed-off-by: Tushar Dave <tushar.n.dave@oracle.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
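The fix itself is just dropping the attribute; a sketch of the resulting definition (the field set shown matches the description, and the real struct may carry more members):

```c
#include <linux/types.h>

/* Without __packed, va/pa/size keep their natural alignment, so sparc no
 * longer traps in dma_4v_alloc_coherent().  Nothing consumes this struct's
 * layout directly as a hardware structure.
 */
struct i40e_dma_mem {
	void *va;	/* CPU virtual address */
	dma_addr_t pa;	/* bus/DMA address */
	u32 size;
};	/* previously declared with __packed */
```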
-
Submitted by Mitch Williams
This device ID was intended for use when running Linux VF drivers under Hyper-V, but we have determined that it is not necessary. Since it is unused, and will never be used, remove it. Change-ID: I74998ab4237db043cd400547bb54a0a5e2a37ea5 Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Bimmy Pujari
I40E_MAC_X710 was supposed to be for 10G and I40E_MAC_XL710 for 40G, but the function i40e_is_mac_710 sets I40E_MAC_XL710 for all device IDs, and I40E_MAC_X710 is not used at all. As there is nothing to compare, there is no need for this function. Deprecate the extra macro, remove the function entirely, and replace it with a direct check. Change-ID: I7d1769954dccd574a290ac04adb836ebd156730e Signed-off-by: Bimmy Pujari <bimmy.pujari@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Instead of using i40e_add_filter or i40e_del_filter directly, when adding a MAC address, we should normally be using i40e_add_mac_filter or i40e_del_mac_filter. These functions correctly handle the various cases of VLAN mode or PVID settings. This ensures consistency and avoids the issues that can occur with the recent addition of a WARN_ON() in i40e_sync_vsi_filters. Change-ID: I7fe62db063391fdd1180b2d6a6a3c5ab4307eeee Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Use __i40e_del_filter instead of using i40e_del_filter(), which will avoid doing an additional search to delete a filter we already have the pointer for. Change-ID: Iea5a7e3cafbf8c682ed9d3b6c69cf5ff53f44daf Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
These functions' purpose is to add a new MAC filter correctly, whether we're using VLANs or not. Their goal is to ensure that all active VLANs get the new MAC filter. Rename them so that their intent is clear. They function correctly regardless of whether we have any active VLANs or only have I40E_VLAN_ANY filters. The new names convey how they function in a clearer manner. Change-ID: Iec1961f968c0223a7132724a74e26a665750b107 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
This function won't be appreciably slower when in VLAN mode, so there is no real reason to not just call it directly. In either case, we still must search the full table for a MAC/VLAN pair. We do get to stop searching a tiny bit early in the case of knowing we are not in VLAN mode, but this is a minor savings and we can avoid the code complexity by not having to worry about the check. Change-ID: I533412195b3a42f51cf629e3675dd5145aea8625 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Fold the check for determining when to call i40e_put_mac_in_vlan directly into the function so that we don't need to decide which function to use ahead of time. This allows us to just call i40e_put_mac_in_vlan directly without having to check ahead of time. Change-ID: Ifff526940748ac14b8418be5df5a149502eed137 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Submitted by Jacob Keller
Now that we have the separate i40e_(add|rm)_vlan_all_mac functions, we should not be using the i40e_vsi_kill_vlan or i40e_vsi_add_vlan functions when PVID is set or when VID is less than 1. This allows us to remove some checks in i40e_vsi_add_vlan and ensures that callers which need to handle VID=0 or VID=-1 don't accidentally invoke the VLAN mode handling used to convert filters when entering VLAN mode. We also update the functions to take u16 instead of s16, since they no longer expect to be called with VID=I40E_VLAN_ANY. Change-ID: Ibddf44a8bb840dde8ceef2a4fdb92fd953b05a57 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 31 January 2017, 1 commit
-
-
Submitted by Eric Dumazet
napi_complete_done() allows drivers to opt in to gro_flush_timeout, added back in linux-3.19 by commit 3b47d303 ("net: gro: add a per device gro flush timer"). This allows for more efficient GRO aggregation without sacrificing latency. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
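In a driver's poll routine the conversion is essentially a one-liner (a sketch; the interrupt re-enable step is driver-specific):

```c
/* Report how much work was done so the core NAPI code can honour
 * gro_flush_timeout and decide whether to really complete polling.
 */
if (work_done < budget) {
	napi_complete_done(napi, work_done);
	/* re-enable the queue interrupt here */
}
```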
-